Canonical Voices

Posts tagged with 'canonical'

Matt Fischer

There are a myriad of ways to do cross-compiles and a smaller myriad that can do chrooted debian package builds. One of my favorite tools for this is pbuilder and I’d like to explain how (and why) I use it.

A pbuilder environment is a chrooted environment which can have a different distroseries or architecture than your host system. This is very useful, for example, when your laptop is running raring x64 and you need to build binaries for saucy armhf to run on Ubuntu Touch. Typically pbuilders are used to build debian packages, but they can also provide you a shell in which you can do non-package compilations. When you exit a pbuilder (typically) any packages you’ve installed or changes you’ve made are dropped. This makes it the perfect testing ground when building packages to ensure that you’ve defined all your dependencies correctly. pbuilder is also smart enough to install deps for you for package builds, which makes your life easier and also avoids polluting your development system with lots of random -dev packages. So if you’re curious, I recommend that you follow along below and try a pbuilder out, it’s pretty simple to get started.

Getting Setup

First install pbuilder and pbuilder-scripts. The scripts add-on really simplifies setup and usage and I highly recommend it. This guide makes heavy use of these scripts, although you can use pbuilder without them.

sudo apt-get install pbuilder pbuilder-scripts

Second, you need to set up your ~/.pbuilderrc file. This file defines a few things, mainly a set of extra default packages that your pbuilder will install and what directories are bind-mounted into your pbuilder. By default pbuilder-scripts looks in ~/Projects, so make that directory at this point as well and set it in the .pbuilderrc file.

Add the following to .pbuilderrc, substitute your username for user:

BINDMOUNTS="${BINDMOUNTS} /home/user/Projects"
EXTRAPACKAGES="${EXTRAPACKAGES} pbuilder devscripts gnupg patchutils vim-tiny openssh-client"

I like having the openssh-client in my pbuilder so I can copy stuff out to target boxes more easily, but it’s not strictly necessary. A full manpage for ~/.pbuilderrc is also available if you want to read about more advanced settings.

Don’t forget to make the folder:
mkdir ~/Projects

Making your First Pbuilder

Now that you’re set up, it’s time to make your first pbuilder. You need to select a distroseries (saucy, raring, etc) and an architecture. I’m going to make one for raring i386. To do this we use pcreate. I use a naming scheme here so that when I see the 10 builders I have, I can keep some sanity. I recommend you do the same, but if you want to call your pbuilder “bob” that’s fine too.

cd ~/Projects
pcreate -a i386 -d raring raring-i386

Running this will drop you into an editor. Here you can add extra sources, for example, if you need packages from a PPA. Any sources you add here will be used every time you use this pbuilder. If you have no idea what I mean by PPA, then just exit your editor here.
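
For example, a PPA sources line looks something like this (the user and archive names here are made up, but the format is the standard one for a PPA):

deb http://ppa.launchpad.net/some-user/some-ppa/ubuntu raring main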

At this point pcreate will be downloading packages and setting up the chroot. This may take 10-30 minutes depending on your connection speed.

This is a good time to make coffee or play video games

Using your pbuilder

pbuilders have two main use cases that I will cover here:

Package Builds

pbuilder for package builds is dead simple. If you place the package code inside ~/Projects/raring-i386, pbuilder will automagically guess the right pbuilder to use. If the code lives elsewhere, you’ll need to specify one.

Aside: To avoid polluting the root folder, I generally lay the folders out like this:

~/Projects/raring-i386/project/project-0.52

Then I just do this:

cd ~/Projects/raring-i386/project/project-0.52
pbuild

This will unpack the pbuilder, install all the deps for “project” and then attempt to build it. It will exit the pbuilder (and repack it) whether it succeeds or fails. Any debs built will be up one level.

Other – via a Shell

The above method works great for building a package, but if you are building over and over to iterate on changes, it’s inefficient. This is because every build needs to unpack the chroot and install dependencies (it is at least smart enough to cache the deps). In this case, it’s faster to drop into a shell and stay there after the build.

cd ~/Projects/raring-i386
ptest

This drops you into a shell inside the chroot, so you’ll need to manually install build-deps.

apt-get build-dep project
dpkg-buildpackage

ptest also works great when you need to do non-package builds, for example, I build all my armhf test code in a pbuilder shell that I’ll leave open for weeks at a time.

Updating your pbuilder

Over time the packages in your pbuilder may get out of date. You can update it simply by running:

pupdate -p raring-i386

This is the equivalent of running apt-get upgrade on your system.

Caveats

A few caveats for starting with pbuilder.

  • Ownership – files built by pbuilder will end up owned by root; if you want to manipulate them later, you’ll need to chown them back or deal with using sudo
  • Signing – unless you bind mount your key into your pbuilder, you cannot sign packages in the pbuilder. I think the wiki page may cover other solutions.
  • Segfaults – I use pbuilders on top of qemu a lot so that I can build for ARM devices; however, it seems that the more complex the compile (perhaps the more memory intensive?), the more likely it is to segfault qemu, thereby killing the pbuilder. This happened to a colleague this week when trying to pbuild Unity8 for armhf. It’s happened to me in the past. The only solution I know for this issue is to build on real hardware.
  • Speed – For emulated builds, like armhf on top of x86_64 hardware (which I do all the time), pbuilds can be slow. Even for non-emulated builds, the pbuilder needs to uncompress itself and install deps every time. For this reason, if you plan on doing multiple builds, I’d start with ptest.
  • Cleanup – When you tire of your pbuilder, you need to remove it from /var/cache/pbuilder. It also caches debs in here and some other goodies. You may need to clean those up manually depending on disk space constraints (see the sketch below).
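
A minimal cleanup sketch; the exact file names under /var/cache/pbuilder depend on how your pbuilders were created, so list the directory first and adjust the assumed names below:

ls /var/cache/pbuilder
sudo rm /var/cache/pbuilder/raring-i386-base.tgz
sudo rm -rf /var/cache/pbuilder/aptcache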

Summary

I’ve really only scratched the surface here on what you can do with pbuilder. Hopefully you can use it for package builds or non-native builds. The Ubuntu wiki page for pbuilder has lots more details, tips, and info. If you have any favorite tips, please leave them as a comment.

Read more
Matt Fischer

This week I’ve been hacking some of the initrd scripts in Ubuntu Touch and I thought that I’d share some of the things I learned. All of this work is based on using Image Update images, which are flashable by doing phablet-flash ubuntu-system. First, why would you want to do this? Well, the initrd includes a script called “touch” which sets up all of the partitions and does some first boot migration. I wanted to modify how this process works for some experiments on customizing the images.

Before getting started, you need the following packages installed on your dev box: abootimg, android-tools-adb, android-tools-fastboot

Note: I was told after posting this that it won’t work on some devices, including Samsung devices, because they use a non-standard boot.img format.

Getting the initrd

The initrd is inside the boot.img file. I pulled mine from here, but you can also get it by dding it off of the phone. You can find the boot partition on your device with the following scriptlet, taken from flash-touch-initrd:

# $BOOT is a list of candidate boot partition names, set earlier in flash-touch-initrd
for i in $BOOT; do
    path=$(find /dev -name "*$i*"|grep disk| head -1)
    [ -n "$path" ] && break
done
echo $path
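
If you go the dd route instead, something like this should work from a root shell on the device. The by-name path here is just an example and varies per device, so use the scriptlet above to find the right partition:

adb shell
dd if=/dev/block/platform/msm_sdcc.1/by-name/boot of=/sdcard/boot.img
exit
adb pull /sdcard/boot.img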

Once you have the boot.img file by whatever means you used, you need to unpack it. abootimg is the tool to use here, so simply run abootimg -x [boot.img]. This will unpack the initrd, kernel and boot config file.

Unpacking and Hacking the initrd

Now that you have the initrd, you need to unpack it so you can make changes. You can do this with some cpio magic, but unless you have a UNIX-sized beard, just run abootimg-unpack-initrd . This will dump everything into a folder named ramdisk. (UNIX beard guys: mkdir ramdisk; cp initrd ramdisk; cd ramdisk; cat initrd | gzip -d | cpio -i)

To make changes, simply cd into ramdisk and hack away. For this example, I’m going to add a simple line to ramdisk/scripts/touch. My line is:

echo "mfisch: it worked!" > /dev/kmsg || true

This will log a message to /var/log/kern.log, which helps us make sure it worked. Your change will probably be less trivial.

Repacking

Repacking the initrd is simple. To repack, just run abootimg-pack-initrd [initrd.img.NEW]. Once you do this you’ll notice that the initrd size is quite different, even if you didn’t make any changes. After discussing this with some people, the best I can figure is that the newly packed cpio file has owners and non-zero datestamps, which make it slightly larger. One clue: when compared to mkinitramfs, abootimg-pack does not use the -R 0:0 argument, and there are other differences. If you want to do this the hard way, you can also repack by doing: cd ramdisk; find . | cpio -o -H newc | gzip -9 > ../initrd.img.NEW

Rebuilding the boot image

The size change we discussed above can be an issue that you need to fix. In the file bootimg.cfg, which you extracted with abootimg -x, there is a line called bootsize. This line needs to be >= the size of the boot.img (not initrd). If the initrd file jumped by 4k or so, like mine did, be sure to bump this as well. I bumped mine from 0x837000 to 0x839000 and it worked. If you don’t do this step, you will wind up with a non-booting image. Once you correct this, rebuild the image with abootimg:

abootimg --create saucy-new.img -f bootimg.cfg -k zImage -r initrd.img.NEW

I’ve found that if your size is off, it will sometimes complain during this step, but not always. It’s best to check the size of saucy-new.img against the bootsize value you set in bootimg.cfg at this point.
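
A quick way to do that check: shell printf will convert the hex bootsize for you, and the image must be no larger than that number of bytes.

ls -l saucy-new.img
printf '%d\n' 0x839000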

Flashing and testing

To flash the new boot image, reboot the device and use fastboot.

adb reboot bootloader
fastboot flash boot saucy-new.img

Use the power button to boot the device now.

Once booted you can go check out the kern.log and see if your change worked.

Aug 13 16:11:04 ubuntu-phablet kernel: [    3.798412] mfisch: it worked!

Looks good to me!

Thanks to Stephane Graber and Oliver Grawert for helping me discover this process.

Read more
Matt Fischer

Over the past few months, I’ve been working on a dbus service (powerd) for Ubuntu Touch. Something that came up recently was the need to get the PID of the processes that call us. We were using this for statistics, tracking who was holding requests, until today, when we decided to go a different direction. So this code has not landed in powerd, but perhaps it is still useful to someone. So I present: how to get the PID and process name of someone that calls you on dbus, in C.

This code assumes a few things. You need to have a working server that handles a call of some sort. We will plug into that call to get the PID of the caller. With that in mind, let’s get started. If you want the version of powerd that does this full async, it’s here: lp:~mfisch/+junk/powerd-pids. Note that this code also incorporates some statistics creation for powerd that is not going to be put into trunk in the form that it is in this branch. Anyway, onto the code:

Create a dbus proxy to make the PID look-up request to

We need a dbus proxy object to talk to. This is the service where we can look up the PID given the dbus name of the connection. I will connect to this proxy asynchronously. In my “main”, I start the connection:

    /* proxy for getting PID info */
    g_dbus_proxy_new_for_bus(G_BUS_TYPE_SYSTEM,
        G_DBUS_PROXY_FLAGS_DO_NOT_LOAD_PROPERTIES,
        NULL,
        "org.freedesktop.DBus",
        "/org/freedesktop/DBus",
        "org.freedesktop.DBus",
        NULL,
        (GAsyncReadyCallback)dbus_proxy_connect_cb,
        NULL);

And then finish it later; the main result here is that dbus_proxy is set so I can use it:

void
dbus_proxy_connect_cb(GObject *source_object,
               GAsyncResult *res,
               gpointer user_data)
{
    GError *error = NULL;

    dbus_proxy = g_dbus_proxy_new_finish (res, &error);
    if (error) {
        g_warning("dbus_proxy_connect_cb failed: %s", error->message);
        g_error_free(error);
        dbus_proxy = NULL;
    }
    else {
        g_debug("dbus_proxy_connect_cb succeeded");
    }
}

In the call that your service handles, do the lookup synchronously

I have a synchronous lookup listed first, then an async one. You should use the async one because you’re a good coder… unless you need to block until you find out who is calling you for some reason. I’ve left some powerd-isms in the function; the source is from the requestSysState method that powerd supports. We will use the dbus_proxy object we created above to request the PID.

gboolean
handle_request_sys_state (PowerdSource *obj, GDBusMethodInvocation *invocation, int state)
{
    const gchar *owner;
    GVariant *result;
    GError *error = NULL;
    guint owner_pid;

    // get the name of the dbus connection that called us
    owner = g_dbus_method_invocation_get_sender(invocation);
    if (dbus_proxy) {
        result = g_dbus_proxy_call_sync(dbus_proxy,
                "GetConnectionUnixProcessID",
                g_variant_new("(s)", owner),
                G_DBUS_CALL_FLAGS_NONE,
                -1,
                NULL,
                &error);
        if (error) {
            g_error("Unable to get PID for %s: %s", owner, error->message);
            g_error_free(error);
            error = NULL;
        }
        else {
            g_variant_get(result, "(u)", &owner_pid);
            g_info("request is from pid %d\n", owner_pid);
        }
    }
    ...
}

Once we have the PID, we can look up the command line by reading /proc/PID/cmdline; my powerd code does this in the async example below.
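
You can sanity-check the C code with the same lookup from a shell; cmdline is NUL-separated, so tr makes it readable (the PID here is just an example):

tr '\0' ' ' < /proc/4434/cmdline; echo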

async dbus for fun and profit

As I stated, synchronous is bad because it makes everyone wait, so here’s the async version.

gboolean
handle_request_sys_state (PowerdSource *obj, GDBusMethodInvocation *invocation, int state)
{
    const gchar *owner;

    // get the name of the dbus connection that called us
    owner = g_dbus_method_invocation_get_sender(invocation);
    g_dbus_proxy_call(dbus_proxy,
        "GetConnectionUnixProcessID",
        g_variant_new("(s)", owner),
        G_DBUS_CALL_FLAGS_NONE,
        -1,
        NULL,
        (GAsyncReadyCallback)get_pid_from_dbus_name_cb,
        NULL);
    ...
}

Here’s our callback where we handle the results; I left in the code that reads the process name from /proc. We have a utility function called sysfs_read that I used.

void
get_pid_from_dbus_name_cb(GObject *source_object,
               GAsyncResult *res,
               gpointer user_data)
{
    GError *error = NULL;
    GVariant *result = NULL;
    guint pid;
    gchar process_name[PROCESS_NAME_LENGTH] = "";
    gchar proc_path[64] = "";
    int ret;

    result = g_dbus_proxy_call_finish (dbus_proxy, res, &error);
    if (error) {
        powerd_warn("get_pid_from_dbus_name_cb failed: %s", error->message);
        g_error_free(error);
    }
    else if (result) {
        g_variant_get(result, "(u)", &pid);
        g_variant_unref(result);
        /* safety check */
        if (pid != 0) {
            sprintf(proc_path, "/proc/%u/cmdline", pid);
            ret = sysfs_read(proc_path, process_name, PROCESS_NAME_LENGTH);
            if (ret < 0)
            {
                powerd_debug("error reading process name from %s: %d",
                    proc_path, ret);
                strcpy(process_name, "UNKNOWN");
            }
            g_debug("PID: %u, Process Name: %s", pid, process_name);
        }
        else {
            /* not sure this can happen */
            powerd_debug("unable to get pid info");
        }
    }
}

With that magic, I can get output like this:

PID: 4434, Process Name: ./powerd-cli
PID: 4436, Process Name: ./powerd-cli
...

But what about Python?

C is too hard you say. If you got carpal tunnel just from reading that code, I have a simple python call to do this for you, synchronously.

#!/usr/bin/python

import dbus
import sys

bus=dbus.SystemBus().get_object('org.freedesktop.DBus', '/org/freedesktop/DBus');
print dbus.Interface(bus, 'org.freedesktop.DBus').GetConnectionUnixProcessID(sys.argv[1]);

I use dbus-monitor to find someone interesting on dbus with this call. :1.17 looks like upower, so let’s see if it worked:

mfisch@caprica:~$ ./foo.py :1.17
1988
mfisch@caprica:~$ cat /proc/1988/cmdline
/usr/lib/upower/upowerd

Looks right to me!

With the python, you can plug the caller’s dbus name in for “sys.argv[1]” and be on your way, or use the C code if you don’t want python’s overhead and think that managing pointers is entertaining.
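
If you’d rather not write any code at all, the same lookup also works with dbus-send, assuming you have it installed:

dbus-send --system --print-reply --dest=org.freedesktop.DBus \
    /org/freedesktop/DBus org.freedesktop.DBus.GetConnectionUnixProcessID \
    string:":1.17"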

Special thanks to Ted Gould who pointed me to this method.

Read more
Matt Fischer

Being a MOTU

Back in October, I wrote a post about my process of becoming a MOTU. I’ve been pretty busy since October. First of all, I had this 9 month build finally finish:

Successfully signed dsc and changes files

Once things sort of settled down from that, I jumped back into updating and syncing packages. This time I was mainly focusing on desktop packages, because that’s the group my mentor worked on. However, I wanted to get some different experiences, so I also worked on some new debian packages (one of which landed).

So after all this, I talked to a few people and it was suggested that I apply for MOTU. So I cleaned up my wikipage and applied for it. The DMB had a lot of questions in the meeting, but I guess I was persuasive enough because I was approved on June 6!

So what’s next? Personally, I want to keep doing updates, complete an SRU, land my other debian package, sponsor some packages, and help other people achieve their goal of being a MOTU also.

I feel that mentoring is probably one of the most important parts of being a MOTU, so even though I’m new, I’d love to help where I can. I can help by answering questions or helping with ideas of things to work on. Finding the work can sometimes be the hardest part, and the only path forward to becoming a MOTU is doing updates and syncs, so it’s critical to keep up the momentum. So if you’re working on this goal, find me on #ubuntu-motu as mfisch and we can chat.

Read more
Matt Fischer

The past few weeks I’ve been on loan to work on Ubuntu Touch, specifically the power daemon, powerd. Seth Forshee and I have been working to enhance the power daemon so that system services can interact with it to request that the device stay active, that is, that the device not suspend. The initial round of this work is complete and is landing today. (Note: There is a lot of low-level kernel interaction stuff landing in the code today too, that is not covered here)

What’s Landing

What’s landing today allows a system service, talking on the system bus, to request the Active system power state. We currently only have two states, Active and Suspend. When there are no Active state requests, powerd will drop the state to Suspend and suspend the device. This is best illustrated by showing how we use the states internally: For example, the user activity timer holds an Active state request until it expires, at which point the request is dropped. The system then scans the list of outstanding state requests and if none are left, it drops the system to Suspend and suspends the system. Pressing the power button works in the same way, except as a toggle: when the screen is on, pressing the power button drops an active request; when off, it makes one.

For now, this ties screen state to system power state, although we plan to change that later. There is no way currently to request a display state independently of a system state, however that is planned for the future as well. For example, a request may be made to keep the screen at a specified brightness.

The API is subject to change and has a few trouble spots, but is all available to look at in the code here. Taking a look at the testclient C code or tester.sh will best illustrate the usage, but remember this is not for apps, it is for other system services. The usage for an app will be to request from a system service via an API something like “playVideoWithScreenOn()”, and then the system service will translate that into a system state request.

Trying it Out

If you want to play with it on your phone, use gdbus to make an active state request and you can block system suspend. You will need to install libglib2.0-bin on your phone if not already installed.

# request active state from PID 99 (a made-up PID). This returns a cookie, which you need later to drop the request. The cookie here is “1”

phablet@localhost:~$ sudo gdbus call --system --dest com.canonical.powerd --object-path\
   /com/canonical/powerd --method com.canonical.powerd.requestSysState 1 99
[sudo] password for phablet: 
(uint32 1,)

Show the outstanding requests:

phablet@localhost:~$ sudo gdbus call --system --dest com.canonical.powerd --object-path /com/canonical/powerd --method com.canonical.powerd.listSysRequests
([(':1.29', 99), ('internal', 36)],)

Now we pass in the cookie we received earlier and clear our request:

phablet@localhost:~$ sudo gdbus call --system --dest com.canonical.powerd --object-path /com/canonical/powerd --method com.canonical.powerd.clearSysState 1
()

Recheck the list:

phablet@localhost:~$ sudo gdbus call --system --dest com.canonical.powerd --object-path /com/canonical/powerd --method com.canonical.powerd.listSysRequests
([('internal', 36)],)
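
If you find yourself doing this a lot, the request/clear dance wraps up nicely in a small script. This is an untested sketch built on the calls above; the sed expression just plucks the cookie out of gdbus’s “(uint32 N,)” output:

#!/bin/sh
# Hold an Active request while a command runs, then clear it.
# Uses our own shell PID rather than a made-up one.
COOKIE=$(sudo gdbus call --system --dest com.canonical.powerd \
    --object-path /com/canonical/powerd \
    --method com.canonical.powerd.requestSysState 1 $$ \
    | sed 's/.*uint32 \([0-9]*\).*/\1/')
"$@"    # run the wrapped command; suspend is blocked meanwhile
sudo gdbus call --system --dest com.canonical.powerd \
    --object-path /com/canonical/powerd \
    --method com.canonical.powerd.clearSysState "$COOKIE"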

Logs

If you want to see everything that is going on, check the powerd log file: sudo tail -f /var/log/upstart/powerd.log. For now we have it logging in debug mode, so it will tell you everything.

But My Device Isn’t Suspending

Even though we request suspend, we may not get to suspend, because it appears that at least on some devices (Nexus4 and maybe others) Android’s sensor service is holding a wakelock. We are also working on this issue.

<6>[ 1249.183061] lm3530_backlight_off, on: 0
<6>[ 1249.185105] request_suspend_state: sleep (0->3) at 1249179158043 (2013-05-21 16:38:57.769127486 UTC)
<4>[ 1249.185441] [Touch D]touch disable
<4>[ 1250.217488] stop_drawing_early_suspend: timeout waiting for userspace to stop drawing
<3>[ 1250.244132] dtv_pipe is not configured yet
--> <6>[ 1250.248679] active wake lock sns_periodic_wakelock
<6>[ 1250.248710] PM: Syncing filesystems...
<6>[ 1250.329741] sync done.

Next Steps

We have a bunch of stuff left to do here. The first obvious one is that using a monotonically increasing int for the cookie is not a great plan, so we will switch that to something like a UUID. We also need to send out dbus signals when the system goes into suspend so that services can react. We need to clean up some of the dbus code while we’re doing that. Finally we plan on implementing display state requests using a similar model to the power state requests. Throughout all of this we need to start integration with the rest of the system.

Read more
Matt Fischer

I was trying to explain how our team does workflow to a former colleague last week, and so I started thinking about all the different workflows I’ve dealt with in my career. This one is by far my favorite, although I know it’s not git, which everyone loves. I’m curious what workflows other groups use with Launchpad. Take a look at this one and let me know: can our team do anything better, can yours?

First a brief note about our team at Canonical. We work on “premium” customer-facing projects, typically on ARM based hardware. We are downstream from Ubuntu for the most part, and although we do send fixes upstream when it makes sense, often we make customizations to packages that cannot go upstream. I’ll use a real-world example for this workflow explanation: we have a platform where we want to remove the user list, help menu entry, and the logout menu entry from the session indicator, so we needed to modify indicator-session to do so.

The tl;dr version of our workflow is Decentralized with shared mainline, with parts of Decentralized with automatic gatekeeper added.

Setup a Shared Master (mainline)

Grab the source for indicator-session for the distroseries we’re based on, precise in this case. We usually grab it from launchpad or apt-get source if launchpad’s precise copy is out of date. This code gets pushed to lp:~project-team/project/indicator-session. This is now the master/mainline version. Everyone on the team has write access to this, provided they follow team rules.

Setting Up My Local Branch

I usually have a pbuilder already set up for our project, so my first step is to set up my local tree. I like to use a two-level hierarchy here so that builds don’t “pollute” my main project area where I have dozens of different branches checked out. So I set up a subdirectory and check out a copy to master.

cd ~/Projects/project-precise-amd64
mkdir indicator-session
cd indicator-session
bzr branch lp:~project-team/project/indicator-session master

Now I branch master. If this wasn’t a fresh checkout, I would bzr pull in master first.

bzr branch master remove-buttons

Make Changes

At this point we make fixes or whatever changes are needed. The package is built, changes are tested, and lintian is run (this one gets forgotten many times).

We have a few goals to meet for changes; we don’t always succeed, but here they are:

  1. No new lintian errors, if it’s a new package that we made, 0 is better.
  2. If the package has unit tests, add a new test case to cover what we just fixed/changed.
  3. Patches should have minimal DEP3 headers (see the sketch after this list).
  4. Coding style should follow upstream.
  5. No new compiler warnings without explanation.
  6. Good changelog entries with bug numbers if applicable. Entries should list what files were modified. Distroseries set to UNRELEASED still (more on why later).
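
Here is a sketch of a minimal DEP3 header; the fields are the standard ones, but all the values below are made up for illustration:

Description: Remove user list and logout entry from the session indicator
Author: Jane Developer <jane@example.com>
Origin: vendor
Bug-Ubuntu: https://bugs.launchpad.net/bugs/NNNNNN
Forwarded: not-needed
Last-Update: 2013-01-15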

A note on lintian: Jenkins is capable of rejecting packages with lintian errors. We have this disabled because we first need to fix the errors that crept in when we didn’t follow this rule.

Push to a Remote Branch for Review

We code review everything we do, so the next step is to make the branch public for a review.

bzr commit -m "good message, usually we just use the changelog entry" --fixes lp:BUGNUM
bzr push lp:~project-team/project/indicator-session-remove-buttons

Setup a Code Review

Everything is reviewed and all reviews are sent to the team, though the onus is on the submitter to ping appropriate people if they don’t get a timely review. For code reviews, everyone is expected to provide a good explanation of what they’re doing and what testing was done.

We also have one of the “enhancements” here, as we have a Jenkins instance (similar to this one) set up for some projects and Jenkins gets to “vote” on the review. Packages that fail to build or fail unit tests are marked as “Rejected” in the review by Jenkins.

Merge Back to Master

After the review is approved, the code submitter merges the code and commits it up to the mainline. I’m paranoid about master changing (although the push will fail if it has), so I always update it first.

We also have to set the distroseries back. We do this on our team because it reduces the chance that someone will dput a package that is built from a local or non-master branch. If someone were to try and dput the changes file built from the remove-buttons branch, it would fail. We really want the archive to only have packages built from master; it’s more repeatable and easier to track changes.

cd ~/Projects/project-precise-amd64/indicator-session
cd master
bzr pull
bzr merge ../remove-buttons
dch -e (modify distroseries from UNRELEASED to precise)
debcommit -r
bzr push :parent

Jenkins Does dput

Our team is slowly moving into the world of Jenkins and build/test automation, so we have Jenkins watching the master branch for interesting projects and it will manage the dput for us. This also provides a final round of build testing before we dput.

Some teams have autolanding setup, that is when the review is approved, the Jenkins instance will do the merge. For now, we’ve kept a human in the loop.

Update the Bug

It is annoying to look at a bug 3 months after you fixed it and wonder what version it’s fixed in. Although the debian/changelog tracks this, we generally always add a bug comment saying when a bug was fixed. For the most part people usually just paste the relevant changelog entry into the bug and make sure it’s marked as Fix Committed.

Read more
Matt Fischer

Last year I worked on a project where I was playing around with system-wide default settings and locks, and I thought I’d share a post based on some of my notes. Almost all of what I will mention here is covered in depth by the dconf SysAdmin guide, so if you plan on using this, please read that guide as well. UPDATE: Gnome has moved all the dconf stuff into the Gnome SysAdmin guide; it’s a bit more scattered now, but it’s there.

For most everyone, you have just one dconf database per user. It is a binary blob and it’s stored in ~/.config/dconf/user. Anytime you change a setting, this file gets updated. For system administrators who may want to set a company-wide default value, a new dconf database must be created.

Create a Profile

The first step in setting up other databases is to create a dconf profile file. By default you don’t need one, since the system uses the default database, user.db, but to set up other databases you will. So create a file called /etc/dconf/profile/user and add the list of databases that you want. Note that this list is a hierarchy and that the user database should always be on top.

For this example, I will create a company database and a division database. The hierarchy implies that we will have company-wide settings at the bottom, perhaps a wallpaper; division-specific settings on top of those, perhaps the IP of a geographically specific proxy server; and each user’s customized settings on top of that.

To create a profile, we’ll do the following:

sudo mkdir -p /etc/dconf/profile

and edit /etc/dconf/profile/user, then add:

user-db:user
system-db:division
system-db:company

Keyfiles

(Note: I am doing this on a relatively clean precise install using a user that has not changed their wallpaper setting; that is important later)

Once you have created the profile hierarchy, you need to create keyfiles that set the values for each database. For this example, we will just set a specific wallpaper for each level of the hierarchy:

sudo mkdir -p /etc/dconf/db/division.d/

and edit /etc/dconf/db/division.d/division.key, add the following:

[org/gnome/desktop/background]
picture-uri='file:///usr/share/backgrounds/Flocking_by_noombox.jpg'

Next we’ll create the company key file:

sudo mkdir -p /etc/dconf/db/company.d/

and edit /etc/dconf/db/company.d/company.key, add the following:

[org/gnome/desktop/background]
picture-uri='file:///usr/share/backgrounds/Murales_by_Jan_Bencini.jpg'

Finally, you need to run sudo dconf update so that dconf sees these changes.

After running dconf update, you will see two changes. The first and most obvious change is that the background is now a bunch of Flocking birds, not the Precise default. The second change is that you will see two new binary dconf database files in /etc/dconf/db, one called company and one called division. If you don’t see these changes then you did something wrong, go back and check the steps.

Since I have no default set, the division’s default takes precedence

The current user and any new users will inherit the Division default wallpaper, Flocking. However, the user still may change the wallpaper to anything they want, and if they change it, that change will be set in the user database, which takes precedence. So this method gives us a soft-default, a default until otherwise modified. If you are trying this test on a user who has already modified the wallpaper, you will notice that it didn’t change due to this precedence.

If we want to force all users, new and existing, to get a specific wallpaper, we need to use a lock.

Locks

For this example, let’s assume that the IS department for our division really really likes the Flocking picture and doesn’t want anyone to change it. In order to force this, we need to set a lock. A lock is simple to make: it just specifies the name of the key that is locked. A locked key takes precedence over all other set keys.

Before doing this, I will use the wallpaper picker and select a new wallpaper; this will take precedence until the lock is created. I picked Bloom for my test.

I like flowers more than birds.

Now it’s time to make the lock, because the IS department really doesn’t like flowers. We create the lock as follows:

sudo mkdir -p /etc/dconf/db/division.d/locks/

and then edit /etc/dconf/db/division.d/locks/division.lock (note file name doesn’t really matter) and add the following line:

/org/gnome/desktop/background/picture-uri

After saving the file, run sudo dconf update. Once doing so, I’m again looking at birds, even though I modified it in my user database to point to Bloom.

Lock file forces me to use the Division settings

One interesting thing to note: any changes the user is making are still being set in their dconf user db, but the lock is overriding what is being seen from outside dconf. So if I change the wallpaper to London Eye in the wallpaper picker and then remove the lock by simply doing sudo rm division.lock && sudo dconf update, I immediately get the London Eye. So it’s important to keep this in mind: the user db is being written into, but the lock is in effect masking the user db value when the setting is read back.
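
You can also see the lock in action from the command line; gsettings reports the effective value and whether the key is currently writable, and with the division lock in place the second command should print false:

gsettings get org.gnome.desktop.background picture-uri
gsettings writable org.gnome.desktop.background picture-uri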

London Eye wallpaper is shown after I remove the lock

Lock Hierarchy

Lock hierarchy is interesting, in that the lowermost lock takes precedence. What this means is that if we lock both the company and division wallpapers, we will see the company one. In the example below I set locks on the wallpaper key for both databases, and I end up seeing Murales, the company default.

Company setting takes precedence with both locked

Locks Without Keys

It is also possible to set a lock on a hierarchy without a corresponding default key. In this instance the system default is used and the user is unable to change the setting. For this example, I set a company lock but removed the company key. The resulting wallpaper is the system default.

System default wallpaper for Precise is seen

What Value is Seen – A Quiz

If you’d like to test your knowledge of what key will take precedence when read from dconf, follow the quiz below, answers are at the bottom. For each scenario, see if you can figure out what wallpaper the user will see, assume the same database hierarchy as used in the example.

  1. User Wallpaper: unset, Division Wallpaper: Flock, Company Wallpaper: Murales
  2. User Wallpaper: London Eye, Division Wallpaper: Flock, Company Wallpaper: Murales
  3. User Wallpaper: London Eye, Division Wallpaper: Flock, Company Wallpaper: Murales, Lock file for Company Wallpaper setting
  4. User Wallpaper: London Eye, Division Wallpaper: Flock, Company Wallpaper: Murales, Lock file for Division and Company Wallpaper setting
  5. User Wallpaper: London Eye, Division Wallpaper: Flock, Company Wallpaper: unset, Lock file for Division and Company Wallpaper setting

Answers: Flock, London Eye, Murales, Murales, Default for Precise

Testing

Some notes about testing this if you are trying it:

    • Creating new users and logging in as them is a good way to see what settings are shown, the wallpaper is a great visual test as it’s easy to verify.
    • Do not do this on your development box. I screwed up my settings right before I was going to give a demo. I’d recommend a VM. If you do screw something up, check .xsession-errors, that’s where my problem was apparent.

Summary

If you’re a system administrator or you really like pictures of birds, dconf keyfiles and locks are the correct mechanism to make settings that are defaults, soft or hard. Hopefully this has been illustrative on how they work. I’d recommend playing with them in a VM and once you understand the hierarchies and locking, they should be pretty easy to use.

Read more
Matt Fischer

EDIT: As several people have pointed out, there is already a script to do this: pull-lp-source. Perfect! I’ve asked numerous people over the last year whether there was a tool that could do this and nobody mentioned this one (I even asked on AskUbuntu last week). So out of all this I end up with a link to a great new tool and got to write some python yesterday. pull-lp-source looks like it will meet all my needs.
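
For the record, pull-lp-source (from ubuntu-dev-tools) covers the examples below like so:

pull-lp-source bc oneiric
pull-lp-source lxc precise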

During my work on bug triage and trying to become MOTU, I’ve found myself wanting to be able to pull source packages for a specified release, for example, download source for lxc on precise, even if I’m using raring. Although you can do this if you set up apt with all the releases and then use pinning, or do a setup like this, I wanted an easier way. So I decided to glue together rmadison and dpkg-source and create a tool called “get_source”. This is how it works.

get_source.py -r <release> -p <package>

Pulling the source for bc on oneiric:

get_source.py -r oneiric -p bc

Grabbing lxc on precise:

get_source.py -r precise -p lxc

Seems pretty simple and it is!

The tool relies on outside helpers to do the hard work, namely rmadison and dpkg-source, so you’ll need those installed to use it. Please give it a try and send in feedback and fixes. If you’re a developer you’ll note that I even have unit-ish tests, please add more if you make some fixes for corner-cases.

bzr branch lp:~mfisch/+junk/get_source

How It Works

  1. Run rmadison and build a list of packages + versions per release
  2. Find the release we care about. We now know the package name, version, and release name.
  3. Using some heuristics, download the dsc file.
  4. Read and parse the dsc file to find the filenames for the orig file and diff and/or debdiff
  5. Download the orig and diff/debdiff files
  6. Use dpkg-source -x to extract it
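
The rough manual equivalent of those steps, with a made-up version number and archive path just for illustration; dget -x fetches the dsc plus the files it references and extracts the source:

rmadison -u ubuntu lxc | grep precise
dget -x http://archive.ubuntu.com/ubuntu/pool/main/l/lxc/lxc_0.7.5-3ubuntu60.dsc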

Alternatives and Issues

When I started this, I figured it would be simple, but I was mistaken. There is a lot of variation in filenames and locations in the archives, for example:

  1. I had originally planned to just go grab http://url/pool/main/<package first letter>/package/package_version.<extension>, but it’s not quite that simple. First, not all packages use standard names, some have a diff.gz, some a debian.tar.gz. Then some packages use xz and some use gz.  Native packages won’t have a diff at all (I think), and right now I know my code won’t support that.
  2. There’s also the question of package directory. alsa-base for example comes from the directory “alsa-driver”. I plan on grabbing this information from apt-cache show, but even that will not solve the issue if I’m on raring and the package was elsewhere in precise. This is also not yet supported in this version.
  3. Packages like angband have a version of 1:3.0.9-1, and the “1:” portion is not included in the filename. The code now supports this.

I found these cases by making this app work for a package and then randomly trying more and more packages to find and hopefully fix new cases. The worry I have is that there are hundreds more corner-cases that I don’t handle. Given all these issues, I’m still releasing this code for other people to test, but perhaps someone has simpler solutions to the problems above? Even better, maybe someone has already written a better tool, which I’ll gladly use!

Read more
Matt Fischer

Limiting LXC Memory Usage

I’ve been playing around with LXC over the past few weeks and one of the things I tried out was limiting the memory that the container is allowed to use. I didn’t plan on explaining all the ins-and-outs of LXC here, but a short description is that LXC provides a virtualizedish environment that is more than a chroot gives you, but less than a full-blown virtual machine. If you want more details, please check out stgraber’s blog post about LXC in 12.04.

Kernel Configuration

The first thing you need to do in order to limit memory usage for LXC is make sure your kernel is properly configured; you need the following flag enabled:

CONFIG_CGROUP_MEM_RES_CTLR=y

If you plan on also limiting swap space usage, you’ll also need:

CONFIG_CGROUP_MEM_RES_CTLR_SWAP=y

These flags are enabled for me in my 12.10 kernel (3.5.0-22) and so presumably you’ll have them in 12.04.

Setting the Cap

First, I’m going to create my container. Following the instructions from stgraber’s blog post, and calling the container “memlimit”:

sudo lxc-create -t ubuntu -n memlimit

Once the container is built, we need to modify the config files. Look at /var/lib/lxc/memlimit/config. We need to add a few lines to that file. I’m going to limit memory to 512M and total usage of memory + swap to 1G. Note the second setting is for overall memory + swap, not just swap usage.

lxc.cgroup.memory.limit_in_bytes = 512M
lxc.cgroup.memory.memsw.limit_in_bytes = 1G

Now let’s start the container and get some debug info out of it to make sure these were set:

sudo lxc-start -n memlimit -l debug -o debug.out

The debug.out file will show up wherever you ran lxc-start from, so let’s see if it picked up our limits:

lxc-start 1359136997.617 DEBUG lxc_conf - cgroup 'memory.limit_in_bytes' set to '512M'
...
lxc-start 1359136997.617 DEBUG lxc_conf - cgroup 'memory.memsw.limit_in_bytes' set to '1G'

Looks good to me!

Note: I tried setting this to 1.5G once, and it seems that the fields are only happy with whole numbers; it complained about 1.5G. That error message appeared in the same debug log I used above.

A list of more of the values you can set in here is available here.

Measuring Memory Usage

The view of /proc/meminfo inside the container and outside the container is the same. This means that you cannot rely on tools like top to show how much memory the container is using. In other words, when run inside the container, top will correctly show only processes inside the container, but it will show overall memory usage for the entire system. To get valid information, we instead need to examine some files in /sys:

Current memory usage:
/sys/fs/cgroup/memory/lxc/memlimit/memory.usage_in_bytes

Current memory + swap usage:
/sys/fs/cgroup/memory/lxc/memlimit/memory.memsw.usage_in_bytes

Maximum memory usage:
/sys/fs/cgroup/memory/lxc/memlimit/memory.max_usage_in_bytes

Maximum memory + swap usage:
/sys/fs/cgroup/memory/lxc/memlimit/memory.memsw.max_usage_in_bytes

You can use expr to show it as KB or MB, which is easier for me to read:

expr `cat memory.max_usage_in_bytes` / 1024
8188

What Happens When the Limit is Reached?

When the cap is reached, the container simply behaves as if the system ran out of memory. Calls to malloc will start failing (returning NULL), leading to strange and bad things happening. Dialog boxes may not open, you may not be able to save files, and more than likely where people didn’t bother to check the returned value from malloc (aka, everyone), you’ll get segfaults. You can alleviate the pressure like normal systems do, by enabling swap inside the container, but once that limit is reached, you’ll have the same problem. In this case the host system’s kernel should start firing up the OOM killer and killing stuff inside the container.

Here is my extremely simple test program to drive up memory usage; install gcc in your container and you can try it too:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    int i;
    for (i = 0; i < 65536; i++) {
        char *q = malloc(65536);    /* leaked on purpose to drive up memory usage */
        printf("Malloced: %ld\n", 65536L * i);
    }
    sleep(9999999);
}

You can compile it simply with: gcc -o foo foo.c

I used the following simple shell construct to watch the memory usage. This needs to be run outside the container and I ran it from the /sys directory mentioned above:

while true; do echo -n "Mem Usage (mb): " && expr `cat memory.usage_in_bytes` / 1024 / 1024; echo -n "Mem+swap Usage (mb): " && expr `cat memory.memsw.usage_in_bytes` / 1024 / 1024; sleep 1; done

With the above shell script running, I fired up a bunch of copies of foo one by one. Here’s the memory usage from that script:

Running a few copies:

Mem+swap Usage (mb): 825
Mem Usage (mb): 511
Mem+swap Usage (mb): 859
Mem Usage (mb): 511

A new copy of foo is starting:

Mem+swap Usage (mb): 899
Mem Usage (mb): 511
Mem+swap Usage (mb): 932
Mem Usage (mb): 511
Mem+swap Usage (mb): 1010
Mem Usage (mb): 511

The OOM killer just said “Nope!”

Mem+swap Usage (mb): 814
Mem Usage (mb): 511
Mem+swap Usage (mb): 825
Mem Usage (mb): 511

At the point where the OOM killer fired up, I see this in my container:
[1] Killed ./foo

So the limits are set, and they’re working.

Conclusion

If you are using LXC or considering using LXC, you can use a memory limit to protect the host from a container run amok. You could also use it to test your code in an artificially restricted environment. In either case, try the tools above and let me know how it works for you.

Read more
Matt Fischer

Last week I was running some cairo perf traces on the Nexus7. Cairo-perf traces are a great way to measure 2d graphics performance and to use those numbers to measure the effects of code, hardware, or driver changes. One other cool thing is that with this tool you can do a benchmark on something like Chromium or Firefox without even needing the application installed.

The purpose of this post is to briefly explain how to build the traces, how to run the tools on Ubuntu, and finally a quick look at some results on the Nexus7.

Before running the tools you need to get set up and build the traces. A full clone and build will use several gigs of disk space. Since the current N7 image only builds a 6G or so filesystem, you may want to build the traces in a pbuilder. The disk I/O on the N7 is also pretty slow, so I found that building in the pbuilder, even though it runs inside a qemu, is much faster on my Core i5 + SSD.

In the steps below I’ve tried to call out the things you can do to reduce the disk space.

Building the traces

1. Set up the build environment

sudo apt-get install libcairo2-dev lzma git

2. Grab the traces from git

git clone git://anongit.freedesktop.org/cairo-traces

3. (Optional) Remove unused files to save on disk space. Don’t do this if you plan on submitting changes back upstream.

cd cairo-traces
rm -rf .git

4. Build the benchmarks. I used -j4 on my laptop and -j2 on the Nexus7. I didn’t really investigate the optimal value.

make -j4 benchmarks

5. The benchmark directory is now ready to use for traces. If you built it on a different system, you only need to copy over this directory. You can delete the lzma files if you want.

The traces you build are pixman-version-specific, so if you have a Raring based system like the Nexus7, you can’t re-use them on a Precise based box.

Running cairo-perf-trace

1. Before you start, delete the ocitysmap trace from the benchmarks folder. It uses too much RAM and ended up locking up my N7.

2. If you are at the command line, connected via ssh for example, you need to set the display or it will segfault; simply run export DISPLAY=:0

3. Run the tool. I’d start with a simple trace to make sure that everything is working.

CAIRO_TEST_TARGET=image cairo-perf-trace -i3 -r ./benchmark/gvim.trace > ~/result_image.txt

In the command above we did a few things; first we set the cairo backend. Image is a software renderer; you probably want to use xlib or xcb to test hardware. If you don’t set CAIRO_TEST_TARGET it will try all the back-ends, which will take a long, long time, and I don’t recommend doing it. A simple way to get the tool to list them all is to set it to a bad value, for example:

mfisch@caprica:~$ CAIRO_TEST_TARGET=mfisch cairo-perf-trace
Cannot find target 'mfisch'.
Known targets: script, xcb, xcb-window, xcb-window&, xcb-render-0.0, xcb-fallback, xlib, xlib-window, xlib-render-0_0, xlib-fallback, image, image16, recording

The next argument, -i3, tells it to run 3 iterations; this gives us a good set of data to work with. -r asks for raw output, which is literally just the amount of time the trace took to run. Finally ./benchmark/gvim.trace shows which trace to run. You can pass in a directory here and it will run them all, but I’d recommend sticking to just one until you know that it’s working. When you’re running a long set of traces, doing a tail -f on the result file can help assure you that it’s working without placing too heavy of a load on the system. The hardware backend runs took almost all day to finish, so you should always be plugged into a power source when doing this.
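
Once a single trace works, a full hardware-backend run over the whole benchmark directory looks something like this, following the options described above:

CAIRO_TEST_TARGET=xlib cairo-perf-trace -i3 -r ./benchmark > ~/result_xlib.txt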

The output should look something like this:
[ # ] backend.content test-size ticks-per-ms time(ticks) ...
[*] xlib.rgba chromium-tabs.0 1e+06 1962036000 1948712000 1938894000

Making Pretty Graphs

Once you have some traces you can make charts with cairo-perf-chart. This undocumented tool has several options which I determined by reading the code. I did send a patch to add a usage() statement to this tool, but nobody has accepted it yet. First, the basic usage, then the options:

cairo-perf-chart nexus7_fbdev_xlib.txt nexus7_tegra3_xlib.txt

cairo-perf-chart will build two charts from that command. The first is an absolute chart, on which larger bars indicate worse performance. The second chart, the relative chart, takes the first argument as the baseline and compares the rest of the results files against it. On the relative chart, a number below the zero line indicates that the results are slower than the baseline (which is the first argument to cairo-perf-chart).

Now a quick note about the useful arguments. cairo-perf-chart can take as many results files as you want to supply it when building graphs, if you’d like to compare more than two files. If you want to resize the chart, just pass --width= and --height=; defaults are 640×480. Another useful option is --html, which generates an HTML comparison chart from the data. The only issue with this option is that you manually need to make a table header and stick it into a basic HTML document.
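
Putting those options together, a run might look something like this; note that I’m assuming here that the HTML output goes to stdout:

cairo-perf-chart --width=1024 --height=768 nexus7_fbdev_xlib.txt nexus7_tegra3_xlib.txt
cairo-perf-chart --html nexus7_fbdev_xlib.txt nexus7_tegra3_xlib.txt > comparison.html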

Some Interesting Results

Now some results from the Nexus7 and they are actually pretty interesting. I compared the system with and without the tegra3 drivers enabled. Actually I just plain uninstalled the tegra3 drivers to get some numbers with fbdev. My first run used the image backend, pure software rendering. As expected the numbers are almost identical, since the software rendering is just using the same CPU+NEON.

Absolute Results – Tegra3 vs fbdev drivers, image (software) backend

Relative Results – Tegra3 vs fbdev drivers, image (software) backend

The second set of results are more interesting. I switched to the xlib backend so we would get hardware rendering. With the tegra3 driver enabled we should expect a massive performance gain, right?

Absolute Results – Tegra3 vs fbdev drivers, xlib backend

Relative Results – Tegra3 vs fbdev drivers, xlib backend

So as it turns out the tegra3 is actually way slower than fbdev and I don’t know why. I think that this could be for a variety of reasons, such as unoptimized 2d driver code or hardware (CPU+NEON vs Tegra3 GPU).

Now that we have a method for gathering data, perhaps we can solve that mystery?

If you want to know more about the benchmarks or see some more analysis, you should read this great post which is where I found out most of the info on running the tools. If you want to know more background about the cairo-perf trace tools you might want to read this excellent blog post.

Read more
Matt Fischer

For my birthday in October, I received a Fitbit One. The reason that I wanted it is that I thought with better data tracking I could push myself to be more active during the day. The Fitbit One is a “fitness tracker”, essentially a technologically enhanced pedometer, that can also measure elevation gain (steps climbed, they call it), and even track your sleep patterns. The device, which is slightly larger than a large paperclip, syncs wirelessly to iPhones or to a computer. It uploads all your statistics to Fitbit.com, which provides a cool dashboard you can use to track your steps, floors climbed, calories burned, etc. Here’s my dashboard from yesterday:

My Fitbit dashboard from yesterday

Like most geeks, I love data, and nice charts and graphs too, so I’ve really enjoyed the dashboard. I’ve also found that the maxim, “What gets measured gets done” really applies here. Two nights ago at 11:30PM, I noticed I was 300 steps short of 10000 steps, so I made sure to walk around while brushing my teeth, took the trash out, and generally wandered until I got past 10000 steps. That was only 300 steps, but I’ve also found myself walking the dog more, walking to the library more, etc.

So what does this have to do with Ubuntu? Well you can see at the bottom of that dashboard that Fitbit gives “badges”, which Chris Wayne thought would be a perfect fit for the Ubuntu Accomplishments system.  So Chris hacked all weekend and created an online account plugin for Fitbit. On Monday we hooked the oauth account created by Chris’s plugin into Fitbit’s web API and now we had Fitbit accomplishments!

Badges I can earn

My Trophies

You need a Fitbit to use it, and if you buy one, use this link so that Chris and I can support our daily beer and daily steps habits. The same link is also in the collection itself.

Installing

Note that this requires Quantal or Raring because it uses Online Accounts. The raring build broke for some reason earlier, but it should be ready an hour from the time this posts.

Installing is easy, although if you don’t already have Ubuntu Accomplishments installed it’s a two step process.

First, install Ubuntu Accomplishments if you’ve not already done so:

sudo add-apt-repository ppa:ubuntu-accomplishments/releases
sudo apt-get update
sudo apt-get install accomplishments-daemon accomplishments-viewer ubuntu-community-accomplishments ubuntu-desktop-accomplishments accomplishments-lens

Then install the Fitbit Accomplishments collection:

sudo add-apt-repository ppa:fitbit-accomplishment-maintainers/daily
sudo apt-get update
sudo apt-get install account-plugin-fitbit ubuntu-fitbit-accomplishments

If you’re already running Ubuntu Accomplishments, you’ll need to close the viewer and restart the Accomplishments Daemon to get the new collection to show up. You can restart the daemon by doing accomplishments-daemon --restart. A simple logout/login will also work.

The first accomplishment you need to get is connecting to your Fitbit account. Chris also wrote a post with some screenshots if you get stuck.

You need to set up your Fitbit Online Account before you can get any Fitbit badges; follow the directions in the accomplishment to do so. Once you do that, the other Fitbit accomplishments will unlock in a logical progression as you achieve things (for example, the 10000 steps in a day accomplishment requires you to complete the 5000 steps in a day accomplishment first).

Note that Fitbit admits that the Badge API is still new and there are some quirks; for example, Fitbit provides badges for 50 and 250 lifetime kilometers, but for lifetime miles, they offer 50, 250, 1000, and 5000. Also some badges are transparent and some are not, which I know we could fix, but I haven’t had time yet. As this API improves and is expanded, we’ll add more accomplishments, or better yet, you can add more by sending us a merge proposal (the code is here).

Why?

Fitbit accomplishments, like walking 10000 steps in a day, obviously have nothing to do with Ubuntu, but this collection highlights the flexibility of the Ubuntu Accomplishments system. Anything that can be tested via script can be an accomplishment. I’m sure there are lots of other websites that people use that could be added as collections like this one. If you’re interested and you need help setting one up, you can find me (mfisch) in #ubuntu-accomplishments on Freenode.

About the Accomplishments Code

The code for checking these accomplishments in the accomplishments scripts is very very simple:

import sys

from helpers import FitBit  # helpers.py provides the FitBit class and caching

badgeid = "10000 DAILY_STEPS"
me = FitBit.fetch(None)
if badgeid in me.badges:
    sys.exit(0)  # badge found: accomplishment earned
else:
    sys.exit(1)  # not earned yet

This is because all the hard logic is in helpers.py, which provides the FitBit class and handles caching for us. Since each accomplishment has a script associated with it, and all unlocked accomplishments are checked every 15 minutes, we cache the info so that we don’t hammer the Fitbit web API once per script on every check. The caching solution in helpers.py was copied from the model used by AskUbuntu and Launchpad in the Ubuntu Community Accomplishments package. helpers.py is also how we interact with the Online Accounts plugin and Fitbit’s web API, so if you want to see the “interesting code”, look there.
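To give a rough idea of what that looks like, here’s a minimal sketch of the time-based caching approach; the file name and function names here are illustrative, not the actual helpers.py code:

# Sketch of time-based caching: names are made up for illustration.
import json
import os
import time

CACHE_FILE = os.path.expanduser('~/.cache/fitbit-accomplishments.json')
CACHE_SECONDS = 15 * 60  # accomplishments are re-checked every 15 minutes

def get_badges(fetch_badges):
    # Return cached badge data if it is fresh enough...
    if os.path.exists(CACHE_FILE):
        age = time.time() - os.path.getmtime(CACHE_FILE)
        if age < CACHE_SECONDS:
            with open(CACHE_FILE) as f:
                return json.load(f)
    # ...otherwise make one real web API call, shared by every script.
    badges = fetch_badges()
    if not os.path.isdir(os.path.dirname(CACHE_FILE)):
        os.makedirs(os.path.dirname(CACHE_FILE))
    with open(CACHE_FILE, 'w') as f:
        json.dump(badges, f)
    return badges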

Note: Expect a follow-up blog post from Chris Wayne on how to write an online accounts plugin in the next couple of weeks.

Help Needed

If you live outside of the US and you have a Fitbit and are willing to help, I need some assistance to see what happens if the Fitbit API returns localized badge info. I also need to see what it looks like when you get a badge marked in kilometers. I don’t think I get these because of where I live (the US). Drop me an email at matt@<this_domain>.com if you can assist, or find me in #ubuntu-accomplishments on freenode, I’m mfisch. I think I’ll only need a few minutes of your time.

Read more
Matt Fischer

Based on the questions I’ve seen on AskUbuntu, lightdm.conf is one of the most misunderstood files on your system. So, I decided I’d write a post on how you can easily modify this file and what the modifications are useful for. I hope to show how to modify the most commonly asked-about settings.

Safely Modifying lightdm.conf

Before you do anything to your lightdm.conf file, you should make a backup, simply run:

sudo cp /etc/lightdm/lightdm.conf /etc/lightdm/lightdm.conf.old

Once you’ve made a backup, the simplest and safest way to modify lightdm.conf is to use lightdm-set-defaults. lightdm-set-defaults was written so that lightdm.conf could be modified via script, but you can also use it to easily make changes by hand. I’ve made several changes to this tool to add new features that I needed, and best of all, I even wrote a manpage for it, which should show up in raring at some point. If you’re not using raring, just run /usr/lib/lightdm/lightdm-set-defaults with no arguments and you’ll get a clear picture of what it can do.

Usage:
lightdm-set-defaults [OPTION...] - set lightdm default values

Help Options:
-h, --help Show help options

Application Options:
-d, --debug Enable debugging
-k, --keep-old Only update if no default already set
-r, --remove Remove default value if it's the current one
-s, --session Set default session
-g, --greeter Set default greeter
-a, --autologin Set autologin user
-i, --hide-users Set greeter-hide-users to true or false
-m, --show-manual-login Set show-manual-login to true or false
-R, --show-remote-login Set show-remote-login to true or false
-l, --allow-guest Set allow-guest to true or false

You can also edit the file manually, but in either case, manual edit or set-defaults, you’ll need to use sudo. And now that you know how to modify the file, let’s cover the most frequently asked-about items.

Disabling Guest Login

Some people really get annoyed by guest login, so if you want to disable it, simply use:

sudo /usr/lib/lightdm/lightdm-set-defaults --allow-guest false

Or, you can manually add the following line in the [SeatDefaults] section:

allow-guest=false

The default for this option is true, so if unset, the guest account will be enabled.  Note: See how great the command option for lightdm-set-defaults was named? Whoever added that was a genius.

Hiding the User List

If you don’t want a user list to be displayed by the greeter, you can enable this option. It should be used together with enabling manual login (below), or logging in may be a challenge (actually, I’ve never tried one without the other, so I’m not sure what will happen).

sudo /usr/lib/lightdm/lightdm-set-defaults --hide-users true

Or, you can manually add the following line in the [SeatDefaults] section:

greeter-hide-users=true

The default for this option is false, so if unset, you will get a user list in the greeter.

Show Manual Login Box

If you previously hid your user list and would like a box where you can manually type in a user name, then this option is for you.

sudo /usr/lib/lightdm/lightdm-set-defaults --show-manual-login true

Or, you can manually add the following line in the [SeatDefaults] section:

greeter-show-manual-login=true

The default for this option is false, so if unset, you won’t get a manual login box.

Autologin

You can enable autologin by specifying the autologin user.

sudo /usr/lib/lightdm/lightdm-set-defaults --autologin username

Or, you can manually add the following line in the [SeatDefaults] section:

autologin-user=username

There are other autologin related options which you may want to set, but none of these can be set using lightdm-set-defaults:

To change how long the greeter delays before starting autologin, use autologin-user-timeout. If not set, the delay is 0, so if you want the delay to be 0, you don’t need to change it. Note: the default for all unset integers in the [SeatDefaults] section is 0.

autologin-user-timeout=delay

To enable autologin of the guest account:

autologin-guest=true
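Putting several of these together, a [SeatDefaults] section using the options covered above might look like this (the user name and timeout are just examples):

[SeatDefaults]
allow-guest=false
greeter-hide-users=true
greeter-show-manual-login=true
autologin-user=username
autologin-user-timeout=10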

Run a Command When X Starts, When The Greeter Starts, When the User Session Starts

When lightdm starts X you can run a command or script, like xset perhaps.

display-setup-script=[script|command]

You can do something similar when the greeter starts:

greeter-setup-script=[script|command]

or when the user session starts:

session-setup-script=[script|command]
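For example, to run xset at X startup (the specific xset arguments here are just an illustration; any command or script path will do):

display-setup-script=xset s off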

Change the Default Session

If you want a different session for the default, you can modify this option. I think the greeter defaults to giving you the last session you chose, so this option only changes the default session. Note: The session switcher will only show up if you have more than one VALID session; a valid session is one that points to a valid executable. By default in 12.10 you will have a session file for gnome-shell, but gnome-shell won’t be installed, so that session is invalid, leaving you with a single valid session (Ubuntu), and hence no session selector!

/usr/lib/lightdm/lightdm-set-defaults --session [session name]

Or, you can manually add the following line in the [SeatDefaults] section:

user-session=[session name]

The list of user sessions is in /usr/share/xsessions, although even that location is configurable (see Advanced Options).

You can change the default greeter in the same manner, using --greeter for lightdm-set-defaults or greeter-session for the config file. The list of installed greeters is in /usr/share/xgreeters.

Advanced Options and All Other Options

There is no manpage for lightdm.conf, but there is an example that lists all the options and a bit about what they do; just look in /usr/share/doc/lightdm/lightdm.conf.gz.  If you use vim, you can just edit the file and it will be automagically ungzipped for you, users of other editors are on their own.
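Well, mostly on their own: zless will page through the compressed file without unpacking it:

zless /usr/share/doc/lightdm/lightdm.conf.gz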

Read more
Matt Fischer

Several months ago, I wrote a stock quote lens using Michael Hall’s Singlet. After installing Quantal in September, I started playing with the preview feature in lenses and I really liked it, so I got motivated to add it to my lens. Turns out, it’s super easy. During the process, I also added real charts for the preview icons and fixed a bug when displaying news stories (they were missing the publication date).

First a quick look at the new graphs and previews, then I’ll go over the code.

Real charts with the quotes

The preview mode offers two buttons for stock quotes: the first one, “More Info”, takes you to the standard stock quote page; the second one, “Interactive Chart”, takes you to a large interactive stock chart. Both are at Yahoo Finance.

Preview with two actions

If you want to try the new version with previews, you need version 0.6 or later. You can get it from the Scopes Packagers PPA; I’ve uploaded versions for Precise, Quantal, and Raring.

Now, let’s look at the code. You can find it here, or just branch it with: bzr branch lp:~mfisch/onehundredscopes/unity-stock-ticker-lens.  The relevant changes are in revision 11; look at the preview function in the scope code (yahoostock-scope). I’ve simplified what’s in my code a bit to make it easier to follow:

def preview(self, result_item, result_model):
    # Title (large font) and description (smaller font) for the preview
    preview = Unity.GenericPreview.new(result_item['title'], result_item['description'], None)
    # Large preview image: a chart for this ticker from Yahoo Finance
    preview.props.image_source_uri = 'http://chart.finance.yahoo.com/t?s=%s&lang=en-US&region=US&width=380&height=380' % result_item['title']
    # Define an action, which becomes a button in the preview UI
    open_chart = Unity.PreviewAction.new("open_chart", "Interactive Chart", None)
    open_chart.connect('activated', self.open_chart)
    preview.add_action(open_chart)
    return preview

The first two lines set up the preview: the first sets the large-font title and the smaller-font text, and the second sets the large preview image. The open_chart lines define an action, which becomes a button in the UI. The button text is “Interactive Chart” and the button action is a function called open_chart. The open_chart function (not listed, but available in the bzr branch) munges the inbound URL a bit and then opens a web browser to the Yahoo interactive chart page.
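If you’re curious, a handler for that action might look something like this; the callback signature, the last_symbol attribute, and the chart URL here are all assumptions on my part, so see the bzr branch for the real implementation:

import webbrowser

# Sketch only: a method on the scope class, connected to 'activated'
def open_chart(self, preview_action):
    symbol = self.last_symbol  # hypothetical attribute holding the ticker
    webbrowser.open('http://finance.yahoo.com/echarts?s=%s' % symbol)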

And that’s it!  Very simple! I had this up and running in about 30 minutes while watching The Pacific on TV.

You can read more about previews in Singlet 0.3 in Michael Hall’s blog post about it. Special thanks to Chris Wayne for his GitHub lens, which was the inspiration for these changes. You can look at Chris’s code for some other examples; his may be easier to follow than mine.

Read more
Matt Fischer

Getting Juju With It

At the UDS in Copenhagen I finally had time to attend a session on Juju Charms. I knew the theory of Juju, which is that it allows you to easily deploy and link services on public clouds, locally, or even on bare metal, but I had never had time to try it out. The Charm School (registration required) session in Copenhagen really showed me the power of what Juju can give you. For example, when I first set up my blog, I had to find a webhost, get an ssh account, download WordPress, install it and its dependencies, set up mysql, configure WordPress, debug why they weren’t communicating, etc. It was super annoying and took way too long. Now, imagine you want to set up ten blogs, or ten instances of couchdb, or one hundred, or one thousand, and it quickly becomes untenable.  With juju, setting up a blog is as simple as:

  • juju deploy wordpress
  • juju deploy mysql
  • juju add-relation wordpress mysql
  • juju expose wordpress

A few minutes later, and I have a functioning WordPress install. For more complex setups and installs Juju helps to manage the relationships between charms and sends events that the charms react to. This makes it easy to add and remove services like haproxy and memcached to an existing webapp. This interaction between charms implies that the more charms that are available the more useful they all become; the network effect applies to charms!
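For example, putting haproxy in front of that blog is just another deploy and relation, something like this (assuming the usual charm names in the store):

  • juju deploy haproxy
  • juju add-relation haproxy wordpress
  • juju expose haproxy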

So after I got home, Charm School had left me energized and ready to write a charm, but I didn’t have any great ideas until I remembered an app that I’ve used before called Tracks. Tracks is a GTD app, in other words, a fancy todo list. I’d used it hosted before, but my free host went offline and I lost all my to-do items. Hosting my own would be much safer. So I started working on a Tracks charm.

If you need an idea for a charm, think about the tools you use that you have to set up: what software have you installed and configured recently? If nothing stands out, you can check out the list of “Charm Needed” bugs. Actually, you should check that list regardless, to make sure nobody else is already writing the same one.

With an idea in hand, I sat down to write my charm. Step one is the documentation, most of which is contained on this page: “Writing a Charm”. I fully expected to spend three weeks learning a new programming language with arcane black magic commands, but I was pleasantly surprised to learn that you can write a charm in any language you want. Most charms seem to be shell scripts or Python, and my charm was simple enough that I wrote it in bash.
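To give you a feel for the moving parts: a charm is essentially a metadata.yaml plus a hooks/ directory of executable scripts (install, start, stop, relation hooks, and so on). A minimal metadata.yaml might look something like this; the values below are illustrative, not copied from the actual Tracks charm:

name: tracks
summary: Tracks, a GTD (fancy todo list) web application
description: |
  Deploys the Tracks web app and exposes it over http.
provides:
  website:
    interface: http
requires:
  database:
    interface: mysql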

During the process of charm writing you may have some questions, and there’s plenty of help to be had. First, the examples contained in the juju trunk are OLD and I wouldn’t recommend you follow them; they are missing things like README files and don’t expose http interfaces, which was requested for my charm. Instead, I’d recommend you pull the wordpress, mysql, and drupal charms from the charm store. If the examples aren’t enough, you can always ask in #juju on freenode or use askubuntu.com. Once your charm works, you can submit it for review. You’ll probably learn a lot during the review; every person I’ve talked to has.

Finally, after a bit of work off and on, my charm was done! I submitted it for review, made a few fixes, and it made it into the store.

I can now have a Tracks instance up and running in just a few minutes.
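Assuming the charm kept the obvious name in the store, that’s just:

  • juju deploy tracks
  • juju expose tracks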

I’ve barely scratched the surface here with this post, but I hope someone will be energized to go investigate charms and write one. Charms do not use black magic and you don’t need to learn a new language to write one. Help is available if you need it and we’d love to have your contributions.
If you go write a charm, please comment here and let me know!

Read more
Matt Fischer

This is a brief post to alert everyone that the 32GB Nexus7 with 3G cannot currently install Ubuntu. The problems have been fixed, but we’re not going to re-roll a Quantal-based image for this; you’ll get a fix when the Raring builds are ready, which should be soon, hopefully this week. The issue, if you’re curious, is that the new radio changed the device ID of the partition we write the rootfs onto. The new code will not make assumptions about the layout and will determine it dynamically.

Read more
Matt Fischer

I was asked last Friday about what type of bugs we see the most on Nexus7.  Right now we have about 75 unfixed bugs, and from having walked through that bug list about twenty times, I know that we can break the bugs down into some categories. These are arbitrary categories and subject to debate, but they show the patterns I see in the bug list:

Kernel/Kernel Config/Drivers: 9 bugs

The initial kernel we used was the Android 3.1 kernel, which included binary drivers. This kernel has configuration and code differences from the standard Ubuntu kernel. The kernel team at Canonical is working on getting the kernel we’re using as close to the standard Ubuntu kernel as possible. This includes things as simple as enabling more modules and as complex as merging in patches so we can support things like overlayfs.

Onboard Related Issues: 9 bugs

For mobility impaired users, Onboard is a part of daily life. For most of us, we only have to use it when we’re playing with a tablet device, so I think it’s great that these bugs are getting some attention. One bug recently fixed by marmuta should lead to Onboard launching 7x faster with certain themes, including the default theme. Some of the bugs in this category are not issues with Onboard itself, but impact Onboard specifically.

Unity/Nux: 6 bugs

Unity and nux have some bugs that impair the usability of the device and a couple bugs that lead to crashes or lock-ups. This is the area I know the least about. Many of the bugs here are sitting in the upstream project as “New”, if you can help confirm them upstream or even find an older dupe, please do.

Tegra3/nVidia: 6 bugs

We found several bugs that seem to be Tegra3-related, Tegra-driver-related, or that need some input from nVidia, for example, the issue where sound only works after a suspend/resume cycle. Not all of these may really be Tegra-related issues in the end; many require more investigation.

—–

So these are my top four categories, but that still leaves over half the bugs out. There are some smaller categories, which I’d list as Touch, Performance, and Bluetooth, and then there is Misc aka Everything Else.

Categories aside, another interesting thing I’ve noticed is that, aside from bugs that are specific to the kernel, drivers, or chipset, almost all the bugs we’ve found were confirmed on other platforms; usually they’re confirmed on someone’s amd64/i386 laptop. Finding, bringing attention to, and fixing these bugs shows that we’re achieving one of our goals, which is to fix issues in Ubuntu Core. These fixes will benefit all platforms. There are only a small number of bugs that we have not been able to confirm on other platforms yet; can any of my readers do so?  Here are the ones that stand out in my mind:

Also I’d like to thank all the new contributors we’ve had in the past couple of weeks, we’re glad for all your help on bugs in any form you can provide it.

Below are the full bug lists for my generated categories:

Kernel:

  • 1068672 webcam support
  • 1072320 please consider adding OTG charging support to kernel
  • 1075549 please include fw_bcmdhd.bin and bcm4330.hcd in linux-firmware for support of the nexus7
  • 1076317 overlayfs support
  • 1070770 bluetoothd dies with glibc malloc memory corruption when used with brcm_patchram
  • 1071259 Setting brightness all the way down actually switches off the display completely
  • 1073499 please consider turning on all possible modules for external USB devices
  • 1073840 Sync kernel configuration with the one from the Ubuntu kernel
  • 1074673 JACK server fails to start

OnBoard:

  • 960537 Dash search box doesn’t unhide Onboard on-screen keyboard
  • 1078554 Onboard doesn’t respect launcher icon size
  • 1081227 onboard should optionally stay hidden if a keyboard is present
  • 1075326 On screen keyboard doesn’t re-position in order to see input
  • 421660 gksu’s and gksudo’s modal password prompt prevents OnBoard’s virtual keyboard input, causing accessibility issues
  • 1079591 onboard can be made thin to the point of unusable
  • 1071508 Onboard onscreen keyboard isn’t always shown when text input selected
  • 1077260 When using software center search, onboard goes away until text box is reslected after entering 2 chars
  • 1077277 Keyboard can’t type into Firefox bookmark dialog

Unity/Nux:

  • 1065638 Unity panels don’t display visuals
  • 1072249 Using desktop switcher via touchscreen causes Unity launcher to stop working
  • 1045256 Dash – It should be possible to vertically scroll the Dash left clicking and dragging
  • 1055949 Unity panel shadow appears as solid black bar on GLES/ARM (Pandaboard, Nexus 7)
  • 1075417 Unity panel/launcher width don’t scale with system DPI/font settings
  • 1070374 unity cannot be cleanly restarted from the command-line on Nexus7

Tegra3/nVidia:

  • 1065644 plymouth causes a hard reset of the nexus
  • 1068804 sound only works after suspend/resume cycle
  • 1070283 after reboot, framebuffer of previous boot appears on screen
  • 1073096 Screen is corrupted between rotations
  • 1067954 control-alt-f1 to bring up a VT shows a blank (black) screen
  • 1070755 screen rotates to portrait sometimes

PS – Thanks to Chris Wayne for vetting my bug category list.

Read more
Matt Fischer

There are dozens of ways that the rest of the community can participate in the Nexus7 project, but I’d like to call out one facet that I’ve been working hard on since I got back from Copenhagen. Chris Wayne and I spend a large portion of our day going through the Nexus7 bug list doing bug triage. In brief, what we do is:

  1. If the bug is unclear, ask the submitter for more info in a comment
  2. Check for duplicate bug reports inside the Nexus7 project
  3. Confirm that the bug really happens on the Nexus7 device
  4. See if the bug occurs on our laptops/dev boxes (x86 running Quantal)
  5. Search the upstream launchpad projects to see if the bug is known already, and mark the Nexus7 one as a duplicate if so
  6. Search gnome bug reports to see if we can link to one
  7. File upstream LP and gnome bug reports if none exist
  8. Test fixes from upstream
  9. Set bug priority
All of these are standard Ubuntu Bug Triage tasks that anyone from the community can do. Better yet, all of these except for #3 and maybe #8 can be done without even owning a Nexus7.

So today, I’m officially asking the community to help, if you’re interested in helping with the Nexus7 work and don’t know where to start, bug triage is a great place and we could use your help.

So how do you start?  A good place to start is by reviewing the New bug queue, following the Triaging guide, and trying to get enough information that a developer can get started on the bug. You should also join the #ubuntu-arm channel on freenode, where we can discuss bug status and priority. Feel free to ping me (mfisch) or Chris (cwayne) if you have questions about a bug or how to help.

You may also consider joining the Ubuntu Bug Squad, and later Ubuntu Bug Control if you want to keep doing this work generally in Ubuntu.

On a personal note: when I first started working on Ubuntu, I joined the Ubuntu Bug Squad because there’s really no better way to learn about the components of Ubuntu than to dive into a bunch of bug reports. After a while doing bug triage work, I was very comfortable with Launchpad and bug triage procedures. I also had a better feel for how the components of the system worked together; it’s amazing how much you can learn from digging into bug reports!

Read more
Matt Fischer

A new Nexus7 image was just posted and can be installed using the standard installer and install process. However, before you rush out and re-install, you should note the changes; only one of them requires a full re-install.

The only change that really requires a re-install is this hostname fix:

The other changes are all in the kernel. The changes below can be installed by running sudo apt-get update && sudo apt-get install linux-nexus7, which will upgrade you to version 3.1.10-7.11 of the kernel:

  • Enable ISO support
  • Enable NFS support
  • Add battery information (upower --dump now works)
  • Enable LXC support
  • Enable SND_USB_AUDIO, disable SND_HDA_INTEL

Over the next few weeks you should expect more changes to come through the standard apt-get upgrade process. These will include syncing the kernel config with the standard Ubuntu one and bug fixes in non-kernel packages.

We may not release a new image again until we have nightlies working. Please note that you should NOT enable any of the standard quantal archives and upgrade things from there. That will supersede some of our fixes, like the ones in Nux, and you will likely end up with an unusable system.

Read more
Matt Fischer

WARNING: This process has changed since the switch to Raring. I don’t yet have a new update for the process.

EDIT: I just added some notes about extracting files from the boot.img in the post.

Several people asked me some questions last week about how the Nexus7 image is built and how they can hack it. Hopefully this post will help to answer some of those questions. Note that nothing described here is supported; it is just presented to enable people interested in hacking the image to get going. This process requires the tools simg2img and make_ext4fs, which you can download pre-compiled binaries for from here.

Hacking a pre-built image rootfs.img

    1. Take the rootfs.img file as input and use the tool simg2img to unpack it. It will be a large file when unpacked, 28G for the 32GB tablet, 13G for the 16GB tablet, and 6G for the 8GB tablet.

mfisch@caprica:~/build$ ./simg2img rootfs.img rootfs.ext4

    2. Mount the rootfs.ext4 file

mfisch@caprica:~/build$ sudo mount -o loop rootfs.ext4 tmpmnt/

    3. Inside tmpmnt, you’ll find the original rootfs.tar.gz. Copy this file out and unmount the directory. You can also remove the rootfs.ext4 file.


mfisch@caprica:~/build/tmpmnt$ cp rootfs.tar.gz ..
mfisch@caprica:~/build/tmpmnt$ cd ..
mfisch@caprica:~/build$ sudo umount tmpmnt/
mfisch@caprica:~/build$ rm rootfs.ext4

    4. Extract the rootfs.tar.gz


mfisch@caprica:~/build$ tar -xvzf rootfs.tar.gz

    5. The extracted filesystem is in ./binary/casper/filesystem.dir. You can copy files into and out of here or modify files.

Once you’re done with the changes, you need to rebuild the rootfs.img file. The first step is to re-tar and recompress the unpacked files.

mfisch@caprica:~/build$ tar -cvzf rebuilt.tar.gz binary/

From this point you can use the same process we use to build images and then flash them, following the process below.

Building or rebuilding a rootfs.img file

Given a tarball image, our image building script basically does a few things:

  1. Extract the kernel and initrd from the rootfs.tar.gz
  2. Write a bootimg.cfg file out using the right values for the Nexus 7.
  3. Create a boot.img file using abootimg using the inputs of the kernel, the initrd, the bootimg.cfg. Note: We had to do some work here to make sure that the initrd was small. I think the limit was 2MB.
  4. Take the rootfs.tar.gz, and using the tool make_ext4fs, create a sparse ext4 filesystem and call the output rootfs.img.

We wrote a script to do all of this, which makes life easier. This process may change as we implement these image builds on cdimage.ubuntu.com and this script may not be updated, but it should be enough to get people hacking. If you do anything cool with this or have fixes for Ubuntu, please let me know or send a patch to one of our bugs.
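For reference, the final make_ext4fs step looks roughly like this; the image size and directory names here are assumptions, so adjust them for your tablet:

mkdir img-root
cp rebuilt.tar.gz img-root/rootfs.tar.gz
./make_ext4fs -s -l 6G rootfs.img img-root/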

Hacking/Rebuilding boot.img

After a few questions I decided to add a brief note about how to hack and rebuild the boot.img. It’s pretty simple and uses the abootimg tool which is in universe for quantal.

To extract the files, use abootimg -x

mfisch@caprica:~/upload-scripts$ abootimg -x boot.img
writing boot image config in bootimg.cfg
extracting kernel in zImage
extracting ramdisk in initrd.img

To rebuild, you can use the script I referenced above, or just run abootimg --create:

mfisch@caprica:~/upload-scripts$ abootimg --create newboot.img -k zImage -f ./bootimg.cfg -r initrd.img
reading config file ./bootimg.cfg
reading kernel from zImage
reading ramdisk from initrd.img
Writing Boot Image newboot.img

Read more
Matt Fischer

12 Months at Canonical

Last fall, I gave notice at my old job, my last day was to be October 28, and on November first, I started at Canonical.

My first day at Canonical was interesting: it began early with a drive to the airport and a flight to Orlando for UDS-P. It was a great way to start a new job; I left the early 10″ (250mm) Colorado snow behind for palm trees and meeting my new team. I met most of them while enjoying dinner and beers in the evening breeze of Orlando by the pool at the Caribe Royale.

I had to shovel my deck so I could grill when I had some of my family visit

My wife, left behind in Colorado with my parents and our kid, who had just contracted chicken pox, was less than pleased with these photos taken by the pool at the Caribe Royale

Once I returned to Colorado, I soon discovered that my job at Canonical was one of a radical generalist. Not a single week has gone by where I didn’t do something new or work on something new. Our team’s mission is to take Ubuntu and make it work great for customers on various ARM-based platforms. “Make it work great” means that almost anything you’ve ever used in Ubuntu, we’ve had to fix or tweak during one of our projects. In addition to hardware work, we’ve also been able to work on some cool features, like remote login and improving test automation in checkbox, and too many more to mention here.

It’s been a fun, educational, and challenging twelve months and I hope the next twelve continue the trend.

Read more