Canonical Voices

Posts tagged with 'packaging'

James Westby

We’ve recently rolled out some changes to the submission process for Click Applications that should make it easier for you to submit new applications, and allow them to be approved more quickly.

Previously, when submitting an application you had to enter all the information about it on the website, even when some of that information was already included in the package itself. This was an irritation in itself, but worse, developers would sometimes make a mistake when re-entering the information, meaning that the app was rejected from review and they had to go back and correct it.

With the new changes, when you submit an application you will wait a few seconds while the package is examined by the system, and you will then be redirected to the same process as before. However this time some of the fields will be pre-filled with information from the package. You won’t have to type in the application name, as it will already be there. This will speed up the process, and should reduce the number of mistakes that happen at that stage.

We’ve also been working on a command-line interface for submitting applications. It’s not polished yet, but if you are intrepid you can try out click-toolbelt.

Read more
Michael Hall

Today I reached another milestone in my open source journey: I got my first package uploaded into Debian’s archives.  I’ve managed to get packages uploaded into Ubuntu before, and I’ve attempted to get one into Debian, but this is the first time I’ve actually gotten a contribution in that would benefit Debian users.

I couldn’t have done it without the help and mentorship of Paul Tagliamonte, but I was also helped by a number of others in the Debian community, so a big thank you to everybody who answered my questions and walked me through getting set up with things like Alioth and re-learning how to use SVN.

One last bit of fun: I was invited to join the Linux Unplugged podcast today to talk about yesterday’s post. You can listen to it (and watch IRC comments scroll by) here: http://www.jupiterbroadcasting.com/51842/neckbeard-entitlement-factor-lup-28/

Read more
Michael Hall

Today was a distracting day for me.  My homeowner’s insurance is requiring that I get my house re-roofed[1], so I’ve had contractors coming and going all day to give me estimates. Beyond just the cost, we’ve been checking on state licensing, insurance, etc.  I’ve been most shocked at the differences in the level of professionalism from them, you can really tell the ones for whom it is a business, and not just a job.

But I still managed to get some work done today.  After a call with Francis Ginther about the API website importers, we should soon be getting regular updates to the current API docs as soon as their source branch is updated.  I will of course make a big announcement when that happens.

I didn’t have much time to work on my Debian contributions today, though I did join the DPMT (Debian Python Modules Team) so that I could upload my new python-model-mommy package with the DPMT as the Maintainer, rather than trying to maintain this package on my own.  Big thanks to Paul Tagliamonte for walking me through all of these steps while I learn.

I’m now into my second week of UbBloPoMo posts, with 8 posts so far.  This is the point where the obligation of posting every day starts to overtake the excitement of it, but I’m going to persevere and try to make it to the end of the month.  I would love to hear what you readers, especially those coming from Planet Ubuntu, think of this effort.

[1] Re-roofing, for those who don’t know, involves removing and replacing the shingles and water-proofing paper, but leaving the plywood itself.  In my case, they’re also going to have to re-nail all of the plywood to the rafters and some other things to bring it up to date with new building codes.  Can’t be too safe in hurricane-prone Florida.

Read more
Michael Hall

Quick overview post today, because it’s late and I don’t have anything particular to talk about today.

First of all, the next vUDS was announced today. We’re a bit late in starting it off, but we wanted to have another one early enough to still be useful to the Trusty release cycle.  Read the linked mailing list post for details about where to find the schedule and how to propose sessions.

I pushed another update to the API website today that does a better job balancing the 2-column view of namespaces and fixes the sub-nav text to match the WordPress side of things. This was the first deployment in a while to go off without a problem, thanks to  having a new staging environment created last time.  I’m hoping my deployment problems on this are now far behind me.

I took a task during my weekly Core Apps update call to look more into the Terminal app’s problem with enter and backspace keys, so I may be pinging some of you in the coming week about it to get some help.  You have been warned.

Finally, I decided a few weeks ago to spread out my after-hours community activity beyond Ubuntu, and I’ve settled on the Debian New Maintainers Django website as somewhere I can easily start.  I’ve got a git repo where I’m starting to write the first unit tests for that website, and as part of that I’m also working on Debian packaging for the Python model-mommy library, which we use extensively in Ubuntu’s Django website. I’m having to learn (or learn more about) Debian packaging, Git workflows and Debian’s processes and community, all of which are going to be good for me, and I’m looking forward to the challenge.

Read more
Daniel Holbach

packaging.ubuntu.com

I’m pleased to announce that the following changes have landed in the Ubuntu Packaging Guide:

  • The Packaging Guide is now fully translated into French! Bravo to the French team! Thanks a lot to everyone who helped out here!
  • We moved from developer.ubuntu.com/packaging to our new home http://packaging.ubuntu.com – don’t despair, redirects are in place! This was done because developer.ubuntu.com has been moving more and more in the direction of delivering tutorials for people who want to create content (apps, scopes, charms, etc.) on top of Ubuntu.
    This also gives us the benefit that we don’t have to integrate the guide into a WordPress installation somehow, but can maintain it all on our own.
    Thanks a lot to Tom Haddon for helping set this up and to Andrew Starr-Bochicchio for beautifying the landing page. Great work everyone!
  • We are finally going to get rid of the old wiki guide. Andrew announced the move many months ago and we should now be safe to remove the content.

Our journey is far from over. If you want to help out, please do!

  • We have bugs filed against the packaging guide and need help. Some are tagged as ‘bitesize’ already.
  • Please also help translating the guide. Many teams have already put some work into this. You can help out by either translating or reviewing translated strings.

Keep up the good work everyone. This is great! :-)

Read more
Jussi Pakkanen

A common step in a software developer’s life is building packages. This happens both directly on your own machine and remotely when waiting for the CI server to test your merge requests.

As an example, let’s look at the libcolumbus package. It is a common small-to-medium sized C++ project with a couple of dependencies. Compiling the source takes around 10 seconds, whereas building the corresponding package takes around three minutes. All things considered this seems like a tolerable delay.

But can we make it faster?

The first step in any optimization task is measurement. To do this we simulated a package builder by building the source code in a chroot. It turns out that configuring the source takes one second, compiling it takes around 12 seconds and installing build dependencies takes 2m 29s. These tests were run on an Intel i7 with 16GB of RAM and an SSD disk. We used CMake’s Make backend with 4 parallel processes.

Clearly, reducing the last part brings the biggest benefits. One simple approach is to store a copy of the chroot after dependencies are installed but before package building has started. This is a one-liner:

sudo btrfs subvolume snapshot -r chroot depped-chroot

Now we can do anything with the chroot and we can always return back by deleting it and restoring the snapshot. Here we use -r so the backed up snapshot is read-only. This way we don’t accidentally change it.
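
If you ever need to throw away a modified chroot and start over, restoring from the snapshot is equally short (a sketch using the same subvolume names as above; the restored copy is writable because we omit -r):

sudo btrfs subvolume delete chroot
sudo btrfs subvolume snapshot depped-chroot chroot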

With this setup, prepping the chroot is, effectively, a zero time operation. Thus we have cut down total build time from 162 seconds to 13, which is a 12-fold performance improvement.

But can we make it faster?

After this fix the longest single step is the compilation. One of the most efficient ways of cutting down compile times is CCache, so let’s use that. For greater separation of concerns, let’s put the CCache repository on its own subvolume.

sudo btrfs subvolume create chroot/root/.ccache

We build the package once and then make a snapshot of the cache.

sudo btrfs subvolume snapshot -r chroot/root/.ccache ccache

Now we can delete the whole chroot. Reassembling it is simple:

sudo btrfs subvolume snapshot depped-chroot chroot
sudo btrfs subvolume snapshot ccache chroot/root/.ccache

The latter command gave an error about incorrect ioctls. The same effect can be achieved with bind mounts, though.
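
For reference, the bind mount alternative might look like this (a sketch; directory names as above, and the mount must be undone before deleting the chroot):

sudo mkdir -p chroot/root/.ccache
sudo mount --bind ccache chroot/root/.ccache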

When doing this the compile time drops to 0.6 seconds. This means that we can compile projects over 100 times faster.

But can we make it faster?

At this point all individual steps take a second or so. Optimizing them further would yield negligible performance improvements. In actual package builds there are other steps that can’t be easily optimized, such as running the unit test suite, running Lintian, gathering and verifying the package and so on.

If we look a bit deeper we find that these are all, effectively, single process operations. (Some build systems, such as Meson, will run unit tests in parallel. They are in the minority, though.) This means that package builders are running processes which consume only one CPU most of the time. According to usually reliable sources package builders are almost always configured to work on only one package at a time.

Having a 24 core monster builder run single threaded executables consecutively does not make much sense. Fortunately this task parallelizes trivially: just build several packages at the same time. Since we could achieve 100 times better performance for a single build and we can run 24 of them at the same time, we find that with a bit of effort we can achieve the same results 2400 times faster. This is roughly equivalent to doing the job of an entire data center on one desktop machine.

The small print

The numbers on this page are slightly optimistic. However, the main reduction in build time achieved with chroot snapshotting still stands.

In reality this approach would require some tuning; for example, you would not want to build LibreOffice with -j 1. Keeping the snapshotted chroots up to date requires some smartness, but these are all solvable engineering problems.

Read more
Daniel Holbach

The German translations team have done it! They brought the German translation of the Ubuntu Packaging Guide above 70%, which is the magic threshold for us to enable the translation in the package. As of earlier today you will find it in the Packaging Guide Daily Build PPA (soon going to land in Debian and then in Ubuntu too):

daniel@daydream:~$ apt-cache search german packaging guide ubuntu
ubuntu-packaging-guide-html-de - Ubuntu Packaging Guide - HTML guide - German version
ubuntu-packaging-guide-pdf-de - Ubuntu Packaging Guide - PDF guide - German version
ubuntu-packaging-guide-epub-de - Ubuntu Packaging Guide - EPUB guide - German version
daniel@daydream:~$

You can also check out the HTML version, single page HTML, PDF version and EPUB version on the web.

This is great news for everyone who wants to get started with Ubuntu development as it will make the first steps easier. Let’s get the translations up to 100% now! :)

Current translations stats are looking like this now:

  • Spanish (96%).
  • Brazilian Portuguese, Russian (83%).
  • German (72%).
  • Traditional Chinese (28%).
  • Japanese (14%).
  • French (10%).
  • Dutch, Indonesian (5%).
  • Chinese Hong Kong (1%).
  • Italian, Greek, Telugu, Australian English, Vietnamese, Kannada, Macedonian, Swedish, Turkish, Simplified Chinese, Latvian, Slovenian, Croatian, Hungarian, Catalan (0%).

Please help out making the guide available in your language as well. Start here.

Read more
Michael Hall

Now that Google+ has added a Communities feature, and seeing as how Jorge Castro has already created one for the wider Ubuntu community, I went ahead and created one specifically for our application developers.  If you are an existing app developer, or someone who is interested in getting started with app development, or thinking about porting an existing app to Ubuntu, be sure to join.

Google+ communities are brand new, so we’ll be figuring out how best to use them in the coming days and weeks, but it seems like a great addition.

Read more
Daniel Holbach

I’m quite happy with the progress the Packaging Guide is making. We managed to fix a bunch of bugs this cycle and most importantly we got it into Ubuntu and made it translatable. We only opened translations a couple of weeks ago, but some language teams have been hard at work:

  1. pt_BR.po (18%)
  2. ja.po (14%)
  3. ru.po (9%)
  4. es.po (5%)
  5. id.po (4%)
    de.po (4%)
  6. nl.po (1%)
  7. sv.po (0%)
    fr.po (0%)
    lv.po (0%)
    zh_TW.po (0%)
    hu.po (0%)
    ca.po (0%)

At UDS we decided that for translations which reach a completion percentage of >= 70% we would build separate packages for those languages. Up until that percentage we will keep the translations only in Launchpad.

This means there is still some way to go for all of us, but this is a great, great step already. Thanks a lot for your hard work on this!

There are obviously many more bugs to fix and we’d love your help.

Bitesize bugs:

Make it prettier:

One bug we’d love to see some help with is #1043232 Packaging Guide FTBFS – it looks like the build fails due to Japanese translations. Right now all translations are disabled, which serves as a workaround.

Thanks again to everyone who helped out with the Packaging Guide. Your help has got many many contributors on their way. Keep up the good work!

Read more
Michael Hall

More than a few, actually. As part of our ongoing focus on App Developers, and helping them get their apps into the Ubuntu Software Center, we need to keep the Application Review Board (ARB) staffed and vibrant. Now that the App Showdown contest is over, we need people to step up and fill the positions of those members whose terms are ending. We also want to grow the community of app reviewers who work with the ARB to process all of the submissions coming in to the MyApps portal.

ARB Membership

Two of the existing members, Bhavani Shankar and Andrew Mitchell, will be continuing to serve on the board, and Alessio Treglia will be joining them. But we still need four more members in order to fill the full 7 seats on the board.  ARB applicants must be Ubuntu Members, Ubuntu Developers, and should create a wiki page for their application.

ARB members help application developers get their apps into Software Center by reviewing their package, providing support and feedback where needed, and finally voting to approve the app’s publication.  You should be able to dedicate a few hours each week to reviewing apps in the queue, and discussing them on IRC and the ARB’s mailing list.

If you would like to apply, you can contact the current ARB members on #ubuntu-arb on Freenode IRC, or the team mailing list (app-review-board at lists.ubuntu.com).  The current term will expire at the end of the month, so be sure to get your applications in as soon as you can.

ARB Helpers

In addition to the 7 members of the ARB itself, we are building a community of volunteers to help review submitted packages, and work with the author to make the necessary changes.  There are no limits or restrictions on members of this community, though a rough knowledge of packaging will surely help.  This group doesn’t vote on applications, but they are essential to helping get those applications ready for a vote.

The ARB helpers community was launched in response to the overwhelming number of submissions that came in during the App Showdown competition.  Daniel Holbach put together a guide for new contributors to help them get started reviewing apps, and you can still follow those same steps if you would like to help out.

Again, if you would like to get involved with this community, you should join #ubuntu-arb on Freenode IRC, or contact the mailing list (app-review-board at lists.ubuntu.com).

Read more
Michael Hall

Due to the popularity of the Ubuntu App Showdown Workshops, I plan to start holding a weekly Q&A session for all Ubuntu app developers using the same format: A live Google+ Hangout with IRC chat.

The first of these will be Wednesday of this week, at 1700 UTC (6pm London, 1pm US Eastern, 10am US Pacific).  Because it will be an On-Air hangout, I won’t have a link until I start the session, but I will post it here on my blog before it starts.  For IRC, I plan on using the #ubuntu-on-air channel on Freenode, though again the exact details will be posted the day of the session.

So bring your questions about developing apps for Ubuntu, and about packaging and submitting them to the Software Center.  If I can’t answer your question myself, I’ll help you find someone who can.

Read more
Michael Hall

Last week we introduced a new ‘Download for Ubuntu’ campaign for upstreams to use on their websites, letting their users know that the app is already available in Ubuntu.  We even generated a list of targeted upstreams we wanted to reach out to in order to spur the adoption of these buttons.  What we didn’t go into much detail about was why upstreams should use them.  I hope to remedy that here.

It’s easy

Let’s just get that out of the way: this won’t take a significant amount of work on the part of an upstream.  It’s just a one-time change to a website.  You don’t even need to change it every cycle, since the buttons point to the App Directory entry for the application itself, not any specific version of it.

It makes installing your app more appealing

The button isn’t just another way of getting your app. It also tells the user that it will install correctly, that all of its dependencies are available and will be installed, that everything is configured to work with their system, and that they will be getting updates and security fixes through a mechanism they already use and trust.  In short, it’s a promise of a good user experience (which I’ll admit we don’t always live up to; more on that below).  Telling 20 million users (and growing) that your app is safe and easy to install is surely worth a few pixels on your website.

It’s good social exposure for your app

By sending users to the App Directory, instead of just immediately installing, new users get to see what other Ubuntu users are saying about your app through their ratings and reviews (which will be mostly positive, because your app is awesome, right?).  Not only does this tell your users that other people like your app, it also tells them that they can add their own ratings and reviews, which will in turn boost your app’s standing.  More reviews lead to more users, which leads to more reviews; it’s a great positive feedback loop.

Users will be looking for it

Not right now, obviously, since we just started this campaign.  But as more upstreams adopt the new button, it’s going to be one of the first things Ubuntu users will be looking for on your website (for all the reasons mentioned above).  With a majority of website visitors leaving in less than a minute (according to a lazy Google search), the promise of a quick and easy install might just be the difference between a new user and a lost opportunity.

This campaign benefits everybody: end users, upstream developers and, yes, Ubuntu too.  So let’s improve these ties, together.  If you’re an upstream, you can copy/paste the following HTML snippet directly into your website (replacing {{pkgname}} with the name of your application’s package in Ubuntu).  If you want to reach out to an upstream developer, please add them to our list so we know who’s contacting them, and what the status is.

<a href="https://apps.ubuntu.com/cat/applications/{{pkgname}}/">
 <img src="http://developer.ubuntu.com/wp-content/uploads/2012/06/downloadonubuntubutton.png"  title="Download for Ubuntu" alt="Download for Ubuntu button" width="122" height="49" />
</a>

Now I know we can’t always give the best user experience possible (see, I told you I’d get to that).  Sometimes our packaging isn’t quite right, or the default configuration of your app is sub-optimal.  Our six month release cadence and package freezes mean that rapidly developing applications will often be out of date in our main repositories.  We’ve taken on a lot of work by distributing apps the way we do, and even though we’re a very large community, it’s still hard to get every package right.  Luckily, you’re not powerless here: if you spot problems with the way we distribute your app, or you need to get a newer version out to Ubuntu users, you can do something about that.

Package fixes

Even though our process locks applications to the version in the archives for that particular release of Ubuntu, we will still allow changes to the packaging itself.  So if we’ve done something wrong on our end that is giving your app a hard time, we’ll fix it and make that available to all of your Ubuntu users as a Stable Release Update.

Backport newer versions

A six-month release cycle means that every Ubuntu release has relatively up to date versions of applications, at least compared to distros that have a longer cadence.  But for rapidly developed applications, where new versions come out more frequently than that, this means their packages can become outdated quickly.  And with the five year lifetime of our LTS releases, most packages will get to be stale by the end.  That’s why we have a special repository just for backporting new versions of packages to stable releases of Ubuntu. And starting with 11.10, this repository is enabled by default.

In order to have your application backported to a stable release, it first has to be accepted into the current development release.  If your new version was in Debian’s unstable repository at the beginning of the development cycle, chances are it’s already there.  If it’s not in Debian you’ll need to submit your package to be included in the development release.  Once it’s there, you can request that it be backported to one or more stable Ubuntu releases.  You can use the requestbackport command line tool (from the ubuntu-dev-tools package) to automate much of the process, or if you’re not running Ubuntu simply file a bug to start the request.
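
For instance, requesting a backport might look roughly like this (a sketch; the package name is a placeholder, and requestbackport will walk you through the remaining details – check its manpage for the available options):

sudo apt-get install ubuntu-dev-tools
requestbackport mypackage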

Read more
Michael Hall

Expanding on my previous post calling for pkgme backend contributors, here’s a list of the backends we would like to see added, and who in the community you can contact for help in making them. If you can act as a mentor for one of these backends, please say so in the comments and I will add your name to the list.    For any questions about pkgme itself, and what options are available to backends, your best bet is to ask James Westby (james_w) in the #pkgme channel on freenode.

Qt/qmake

QMake is a Makefile-generator. It uses information that the application author puts into a project file to build the Makefile for a project. A qmake backend would need to either use qmake to extract the information requested by pkgme, or parse the same project file that qmake uses in order to provide that information.

Information about qmake: http://qt-project.org/doc/qt-4.8/qmake-manual.html

Help Contact: Angelo Compagnucci

Flash

Flash applications can be packaged for Ubuntu by wrapping them in a GTK window that contains a Webkit browser widget, and an index.html file for it to load that embeds the given flash file.

The Quickly Flash template currently does much the same thing. To do the same in pkgme, you will need to pass the necessary wrapper files to the extra_files request. extra_files should return a JSON object where the key is the file path relative to the root of the target application, and the value is the contents of that file.
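
As a rough illustration, an extra_files script for such a backend might print something like this (a sketch; the file name and one-line wrapper are hypothetical, and a real backend would emit the full GTK/Webkit wrapper files):

#!/bin/sh
# Print a JSON object mapping relative file paths to file contents.
cat <<'EOF'
{"index.html": "<html><body><embed src=\"app.swf\"/></body></html>"}
EOF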

Help Contacts: Michael Terry and Stuart Langridge

HTML5

A backend for an HTML5 application would also require wrapping the target application in a GTK window with embedded Webkit widget. Only instead of creating an index.html, you would just point the Webkit widget to the target application’s HTML files.

Help Contact: Didier Roche

Java

The Java backend would need to parse ant’s build.xml files to extract information about the target application or an already built jar file’s manifest.

Help Contact: James Page

Read more
Michael Hall

pkgme is a small utility created by James Westby; its purpose is to create a Debian package for any unpackaged application.  It’s currently used when applications are submitted through the Ubuntu Developer Portal as tarballs, inspecting the contents of the application to determine how to build a package from it.  In order to support many different types and configurations of application, James built pkgme to support any number of different backends.

Currently there is support for apps using Python and Distutils, apps compiled by cmake, and apps written in Vala.  But there are still many, many applications out there that aren’t covered by these backends, including Qt apps, HTML5 apps, Flash apps and more.  That’s where you, dear contributor, come in.

UPDATE: Here is a list of desired backends and mentors to help you with them.

But I don’t know how to create packages!

That’s okay, you don’t need to know how to make packages to create a pkgme backend.  pkgme already knows how to make packages; what it doesn’t know is where to find the information it needs to do that.  That’s what backends are: just one or more small scripts that extract enough information about a project to let pkgme do its thing.

Ok, I’m interested, how do I start?

First of all, get a copy of the latest pkgme code from its bazaar branch in Launchpad:

bzr branch lp:pkgme ./pkgme

Then, create a VirtualEnv environment to install it into:

virtualenv ./env

Then, install it into the Virtualenv:

source ./env/bin/activate
cd ./pkgme
python setup.py develop

Now you’ve got a working pkgme installed and running in your virtualenv. You can leave your virtualenv by running ‘deactivate’.  Time to get started on your backend!

Where do I put my new backend code?

Since we’re going to submit your new backend to the pkgme branch, we can just create it there:

cd ..
mkdir ./pkgme/pkgme/backends/<your backend name>

Great, now I have an empty Backend, what do I put here?

The first thing your backend needs is a ‘want’ file.  You see, in order for pkgme to know which backend it should use on any particular application, it needs to ask every backend how much they want it.  It does this by executing a script named ‘want’ in each backend.

Your want file is executed from the target application’s directory, so in your script ./ will be the root of the target application’s directory.  This lets your script easily browse through the files in the application to determine how well it can provide packaging information for it.

In order to tell pkgme how much your backend wants to handle the target application, your ‘want’ file simply needs to print a number to STDOUT.  The backend with the highest number is the one pkgme will use.  These are the suggested ranges for your ‘want’ value:

  • 0 – no information can be provided about the project (e.g. a Ruby backend with a Python project).
  • 10 – some information can be provided, but the backend is generic (e.g. Ruby backend).
  • 20 – some information can be provided, and the backend is more specific than just a language (e.g. Ruby on Rails backend).
  • 30 – some information can be provided, and the backend is highly specialized.
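
As an illustration, a minimal ‘want’ script for a hypothetical Distutils backend might look like this (a sketch; the score of 20 assumes a backend more specific than plain language detection):

#!/bin/sh
# Run from the target application's directory; print our 'want'
# score to STDOUT.
if [ -f setup.py ]; then
    echo 20
else
    echo 0
fi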

Now I have what I want, what do I do with it?

Once pkgme has chosen your backend to use against an application, it will call one or more scripts from your backend to get information about the application.  As the backend author, you can choose to provide separate scripts for each piece of information, or you can provide just a single script called ‘all_info’ that will provide everything.

Lots of scripts

For separate scripts, you will need to provide an executable in your backend directory for each of the pieces of information that pkgme might request.  Each script should print that information to STDOUT, or exit with an error if it can not provide it.
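
For example, a hypothetical ‘package_name’ script (assuming pkgme requests a field by that name) could be as simple as:

#!/bin/sh
# Guess the package name from the project's directory name.
basename "$(pwd)"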

Just one script

However, if looking up bits of information one at a time is a time-consuming task for your backend, you can do it all in one shot.  If you want to do that, then the only script you need is one called ‘all_info’.  When this script is called, it is given a JSON list on STDIN.  This list contains the keys for all the pieces of information that pkgme needs from your backend.  As output, this script needs to print a JSON dictionary to STDOUT.  This dictionary should contain a key for each of the fields sent as input, along with its corresponding value.  If your backend can’t provide a value for one of those fields, it should be left out of the dictionary.
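
A skeletal ‘all_info’ script might look like this (a sketch with hard-coded values and assumed field names; a real backend would inspect the project to fill them in):

#!/bin/sh
# pkgme sends a JSON list of the keys it wants on STDIN, e.g.
# ["package_name", "description", "version"]
read -r wanted_keys
# Print a JSON dictionary with every field we can provide; fields
# we cannot determine are simply left out.
cat <<'EOF'
{"package_name": "myapp", "description": "An example app", "version": "1.0"}
EOF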

You can test your new backend by switching to the directory of a project your backend is made to support, and running:

pkgme

Make sure your virtualenv is still activated, or pkgme won’t be found. If everything works, you should have a ./debian/ directory in the application’s root folder.

Hurray, my backend works.  Do you want it?

Of course we want it!  What a silly question.  And it’s already in your local branch of pkgme too!  Well, it’s in the directory anyway; you still need to add it to the working set:

cd ./pkgme/pkgme/backends/
bzr add <your backend name>

Then commit your changes and push them back to Launchpad:

bzr commit -m "Added backend for <your backend name>"
bzr push lp:~<your lp username>/pkgme/add-backend-<your backend name>

Then head on over to https://code.launchpad.net/pkgme, click on your new branch name, and then click the “propose for merging” link.  Fill out the description of what your backend adds, and submit it.  From there it will get reviewed by one of pkgme’s maintainers, and either get merged into the main branch, or sent back to you for fixes.

Read more
Michael Hall

My big focus during the week of UDS will be on improving our Application Developer story, tools and services.  Ubuntu 12.04 is already an excellent platform for app developers, now we need to work on spreading awareness of what we offer and polishing any rough edges we find.  Below are the list of sessions I’ll be leading or participating in that focus on these tasks.

And if you’re curious about what else I’ll be up to, my full schedule for the week can be found here: http://summit.ubuntu.com/uds-q/participant/mhall119/

Read more
Michael Hall

Everybody knows that programmers can contribute to Unity, and I’ve shown in my previous posts that non-developers can still contribute features and fixes that make applications integrate better.  But what if your skills lay more on the creative side of the spectrum?

Well, it just so happens that you have something to contribute to Unity too.  In fact, we’re currently in need of some graphic design talent to put some extra polish on some areas of application integration.  Specifically, we need people to help create vector art for application icons that only have raster images (PNG, XPM, etc.).

This wiki page contains a list of applications that have been identified as needing an SVG icon.

Now graphic creation isn’t my specialty, so I’m not going to write a step by step guide to creating these images, that’s up to you artists.  What I am going to do, however, is walk you through the process of coordinating with the upstream application developers and submitting your finished image to Ubuntu.

1) Contact the upstream

This is an important step, because even if an application doesn’t have an SVG icon in Ubuntu, there’s still a chance that one already exists.  Read over the first half of my post on upstreaming Quicklists for ways to get in contact with them. Ask them if they have an SVG source for their application’s icon.  If they do, that’s great! You can take that and skip down to step #3.  If they don’t, then you will need to work with the upstream project to create one that is right for them.

2) Work with the current image

It’s important that we don’t try and re-brand an application unless the authors want it re-branded.  What we want is a more flexible/scalable version of the image icon we already have.  If you are creating a new SVG file, try to keep as close to the raster image as possible, and be sure to talk to the upstream developers about any deviations or changes you need to make.  And finally, keep with the spirit of open source and make your new image available to both Ubuntu and the upstream project under a copy-left license like the CC-BY-SA or another permissive license of the upstream’s preference.

3) Preparing your image

Since we are getting close to the release of 12.04, the requirements for any further changes are getting stricter.  In order to get your image into the Precise packages, you will need to meet the following two criteria:

It must be approved by the upstream project.  Since your image will be representing their application in Ubuntu, we absolutely need their acceptance of it before it can be used.  This is why step #1 is so vitally important, make sure you are working and communicating closely with upstream from the very beginning.

It must be a plain SVG file.  This is because it will be added as a patch file against the package, and patch files don’t work well with binary data.  Since a plain SVG file is text, not binary, it makes it much easier to convert into a patch.

4) Submit your new image

The wiki page containing the list of applications has a link to the corresponding bug report filed in Launchpad.  When your image is ready, attach it to the bug report.

You will also need to add the upstream project to the bug report.  Click the “Also affects project” link on the bug page, and choose the Launchpad Project that matches your upstream.

That’s it!  Well, almost.  Once we have your image, the application’s package in Ubuntu will need to be updated to use it, but that will require some changes to packaging scripts and patch files, which will be the subject of a more technical post.  But getting the necessary image is itself a big step.

Read more
Michael Hall

Bazaar is a great tool for distributed development, but distros are built on packages, and so packages are what distro developers care about.  That’s why many of you who have followed my previous blogs have probably been asked for patches to the package itself, not to the bzr branch.

Why the difference?  Well, for package maintainers, it’s easier and faster to import upstream changes if they keep their source code clean.  To do that, any changes made by the distro are applied on top of the unmodified upstream code in the form of patches.  There are many tools designed specifically to make this easy for package maintainers.

Below I’m going to show you how to turn your code change into a package patch that is easy for Ubuntu developers to add to the distro’s packages.  Only do this if your submitted branch is to a package in main and it hasn’t already been merged.

0) Check your source package format

The following instructions will only work on source packages using the 3.0 (quilt) source format for managing patches.  Before you do anything else, check that the file debian/source/format contains the following:

3.0 (quilt)

 

1) Find your revisions

Starting from your existing code branch, we first need to identify which revisions in your branch we need to turn into a patch.  To do that, we simply check for revisions in your branch that don’t exist in the main one.  Here is what I used for geany:

bzr missing --mine-only ubuntu:geany

You just need to replace ‘geany’ with your application’s branch name (the same one you bzr branched in my earlier articles).  The --mine-only option will limit the result to only revisions in your branch, just to keep things simple.  You’ll want to make note of the first and last revisions in this output.  If, like me, you only had one revision missing, that makes it even easier.

 

2) Generate the patch

Fortunately the package “bzr-builddeb” provides a command that makes this step easy.

mkdir -p debian/patches
bzr dep3-patch -d ubuntu:geany . > debian/patches/add_keywords.patch

Again, just replace ‘geany’ with your application’s branch name, and dep3-patch will find the differences in your branch and convert them into a patch file.

Now that you have a patch file, we need to add it to the list of patches for this package.  To do that, all you need is to add its name to the end of the debian/patches/series file like this:

echo add_keywords.patch >> debian/patches/series

 

3) Convert your source changes

Now that your changes are in a patch file, we need to remove those changes from the source code itself.  This is where those revision numbers from step 1 come in: you will need the highest revision number and one less than the lowest.  Since I only had one revision, rev 32, my numbers are 32 and 31.

bzr diff -r 32..31 | bzr patch

This causes bzr to generate a reverse-diff of your changes (by going from the higher to the lower revision), and then apply that reverse-diff to your current code, effectively undoing your changes.

Now you need to apply your new patch file using quilt, so that quilt knows about it:

quilt push -a

Which should give you the following output if everything applies cleanly (if not, then your package is going to need some extra work, and you should ask for help from someone in #ubuntu-devel on freenode IRC).

Applying patch add_keywords.patch
patching file geany.desktop.in

Now at patch add_keywords.patch

 

4) Log your changes

Since you are making changes to the package itself now, you need to add that information to the debian/changelog:

export DEBFULLNAME="Michael Hall"
export DEBEMAIL="mhall119@ubuntu.com"
dch -i

You will, of course, want to replace my name and email with your own (Hint: you can put those 2 export lines into ~/.bashrc for future packaging work). This will create a new entry in the changelog for you, with a one-higher version number.  All you need to do is add in the comments:

* Add search keywords to .desktop file (LP: #942154)

Be sure to use the proper bug number for your changes.  Also, if you are not running on Precise, you will need to change the release target at the top of the file to ‘precise’.  Here’s what my new record looks like:

geany (0.21.dfsg-1ubuntu4) precise; urgency=low

  * Add search keywords to .desktop file (LP: #942154)

 -- Michael Hall <mhall119@ubuntu.com>  Wed, 07 Mar 2012 14:40:32 -0500

 

5) Commit and push

Now it’s time to put everything back into your bzr branch.  First you need to add your patch file:

bzr add debian/patches/add_keywords.patch
bzr add debian/patches/series
bzr add .pc/

If your package branch didn’t already have a ‘series’ file, my instructions in step 2 will have created one, so I’m adding it here just in case.  If it already existed, bzr add won’t do anything.

Next, commit and push your changes back to your submitted branch:

bzr commit -m "Convert source changes into a package patch file"
bzr push lp:~mhall119/ubuntu/precise/geany/add_keywords

 

Read more
Martin Pool

Jelmer writes:

bzr-builddeb 2.8.1 has just landed on Debian Sid and Ubuntu Precise. This version contains some of my improvements from late last year for the handling of quilt patches in packaging branches. Most of these improvements depend on bzr 2.5 beta 5, which is also in Sid/Precise.

The most relevant changes (enabled by default) are:

  • ‘bzr merge-package’ is now integrated into ‘bzr merge’ (it’s just a hook that fires on merges involving packages)
  • patches are automatically unapplied in relevant trees before merges
  • before a commit, bzr will warn if you have some applied and some unapplied quilt patches

Furthermore, you can now specify whether you would like bzr to automatically apply all patches for stored data and whether you would like to automatically have them applied in your working tree by setting ‘quilt-tree-policy’ and ‘quilt-commit-policy’ to either ‘applied’ or ‘unapplied’. This means that you can have the patches unapplied in the repository, but automatically have them applied upon checkout or update. Setting these configuration options to an empty string causes bzr to not touch your patches during commits, checkout or update.
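
For example, to keep patches unapplied in the repository but have them applied in your working tree, you could set the following in your builddeb configuration (a sketch based on the description above):

quilt-commit-policy = unapplied
quilt-tree-policy = applied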

We’ve done some testing of it, as well as running through a package merge involving patches with Barry, but none of us do package merges regularly. If you do run into issues or if you think there are ways we can improve the quilt handling further, please comment here or file a bug report against the UDD project.

Caveats:

  • If there are patches to unapply for the OTHER tree, bzr will currently create a separate checkout and unapply the patches there. This may have performance consequences for big packages. The best way to prevent this is to set ‘quilt-commit-policy = unapplied’.
  • ‘bzr merge’ will now fail if you are merging in a packaging tree that is lacking pristine tar metadata; I’m submitting a fix for this, but it’s not in 2.8.1.
  • if you set ‘quilt-commit-policy’ and ‘quilt-tree-policy’ but have them set to different values, bzr will consider the tree to have changes.

To disable the automatic unapplying of patches and fall back to the previous behaviour, set the following in your builddeb configuration:

quilt-smart-merge = False

Read more
Barry Warsaw

sbuild is an excellent tool for locally building Ubuntu and Debian packages.  It fits into roughly the same problem space as the more popular pbuilder, but for many reasons, I prefer sbuild.  It's based on schroot to create chroot environments for any distribution and version you might want.  For example, I have chroots for Ubuntu Oneiric, Natty, Maverick, and Lucid, Debian Sid, Wheezy, and Squeeze, for both i386 and amd64.  It uses an overlay filesystem so you can easily set up the primary snapshot with whatever packages or prerequisites you want, and the individual builds will create a new session with an overlaid temporary filesystem on top of that, so the build results will not affect your primary snapshot.  sbuild can also be configured to save the session depending on the success or failure of your build, which is fantastic for debugging build failures.  I've been told that Launchpad's build farm uses a customized version of sbuild, and in my experience, if you can get a package to build locally with sbuild, it will build fine in the main archive or a PPA.

Right out of the box, sbuild will work great for individual package builds, with very little configuration or setup.  The Ubuntu Security Team's wiki page has some excellent instructions for getting started (you can stop reading when you get to UMT :).

One thing that sbuild doesn't do very well, though, is help you build a stack of packages.  By that I mean, when you have a new package that itself has new dependencies, you need to build those dependencies first, and then build your new package based on those dependencies.  Here's an example.

I'm working on bug 832864 and I wanted to see if I could build the newer Debian Sid version of the PySide package.  However, this requires newer apiextractor, generatorrunner, and shiboken packages (and technically speaking, debhelper too, but I'm working around that), so you have to arrange for the chroot to have those newer packages when it builds PySide, rather than the ones in the Oneiric archive.  This is something that PPAs do very nicely, because when you build a package in your PPA, it will use the other packages in that PPA as dependencies before it uses the standard archive.  The problem with PPAs though is that when the Launchpad build farm is overloaded, you might have to wait several hours for your build.  Those long turnarounds don't help productivity much. ;)

What I wanted was something like the PPA dependencies, but with the speed and responsiveness of a local build.  After reading the sbuild manpage, and "suffering" through a scan of its source code (sbuild is written in Perl :), I found that this wasn't really supported by sbuild.  However, sbuild does have hooks that can run at various times during the build, which seemed promising.  My colleague Kees Cook was a contributor to sbuild, so a quick IRC chat indicated that most people create a local repository, populating it with the dependencies as you build them.  Of course, I want to automate that as much as possible.  The requisite googling found a few hints here and there, but nothing to pull it all together.  With some willful hackery, I managed to get it working.

Rather than post some code that will almost immediately go out of date, let me point you to the bzr repository where you can find the code.  There are two scripts: prep.sh and scan.sh, along with a snippet for your ~/.sbuildrc file to make it even easier.  sbuild will call scan.sh first, but here's the important part: it calls that outside the chroot, as you (not root). You'll probably want to change $where though; this is where you drop the .deb and .dsc files for the dependencies.  Note too, that you'll need to add an entry to your /etc/schroot/default/fstab file so that your outside-the-chroot repo directory gets mapped to /repo inside the chroot.  For example:

# Expose local apt repository to the chroot
/home/barry/ubuntu/repo    /repo    none   rw,bind  0 0

An apt repository needs a Packages and Packages.gz file for binary packages, and a Sources and Sources.gz file for the source packages.  Secure APT also requires a Release and Release.gpg file signed with a known key.  The scan.sh file sets all this up, using the apt-ftparchive command.  The first apt-ftparchive call creates the Sources and Sources.gz file.  It scans all your .dsc files and generates the proper entries, then creates a compressed copy, which is what apt actually "downloads".  The tricky thing here is that without changing directories before calling apt-ftparchive, your outside-the-chroot paths will leak into this file, in the form of Directory: headers in Sources.gz.  Because that path won't generally be available inside the chroot, we have to get rid of those headers.  I'm sure there's an apt-ftparchive option to do this, but I couldn't find it.  I accidentally discovered that cd'ing to the directory with the .dsc files was enough to trick the command into omitting the Directory: headers.

The second call to apt-ftparchive creates the Packages and Packages.gz files.  As with the source files, we get some outside-the-chroot paths leaking in, this time as path prefixes to the Filename: header value.  Again, we have to get rid of these prefixes, but cd'ing to the directory with the .deb files doesn't do the trick.  No doubt there's some apt-ftparchive magical option for this too, but sed'ing out the paths works well enough.

The third apt-ftparchive call creates the Release file.  I shamelessly stole this from the security team's update_repo script.  The tricky part here is getting Release signed with a gpg key that will be available to apt inside the chroot.  sbuild comes with its own signing key, so all you have to do is specify its public and private keys when signing the file.  However, because the public key file /var/lib/sbuild/apt-keys/sbuild-key.pub won't be available inside the chroot, the script copies it to what will be /repo inside the chroot.  You'll see later how this comes into play.

Okay, so now we have the repository set up well enough for sbuild to carry on.  Later, before the build commences, sbuild will call prep.sh, but this script gets called inside the chroot, as the root user.  Of course, at this point /repo is mounted in the chroot too.  All prep.sh needs to do is add a sources.list.d entry so apt can find your local repository, and it needs to add the public key of the sbuild signing key pair to apt's keyring.  After it does this, it needs to do one more apt-get update.  It's useful to know that at the point when sbuild calls prep.sh, it's already done one apt-get update, so this does add a duplicate step, but at least we're fortunate enough that prep.sh gets called before sbuild installs all the build dependencies.  Once prep.sh is run, the chroot will have your overriding dependent packages, and will proceed with a normal build.
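
A sketch of what prep.sh does inside the chroot (file names are illustrative):

echo "deb file:/repo ./" > /etc/apt/sources.list.d/local-repo.list
apt-key add /repo/sbuild-key.pub
apt-get update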

Simple, huh?

Besides getting rid of the hackery mentioned above, there are a few things that could be done better:
  • Different /repo mounts for each different chroot
  • A command line switch to disable the /repo
  • Automatically placing .debs into the outside-the-chroot repo directory

Anyway, it all seems to hang together.  Please let me know what you think, and if you find better workarounds for the icky hacks.
 

Read more
Barry Warsaw

So, yesterday (June 21, 2011), six talented and motivated Python hackers from the Washington DC area met at Panera Bread in downtown Silver Spring, Maryland to sprint on PEP 382. This is a Python Enhancement Proposal to introduce a better way for handling namespace packages, and our intent is to get this feature landed in Python 3.3. Here then is a summary, from my own spotty notes and memory, of how the sprint went.

First, just a brief outline of what the PEP does. For more details please read the PEP itself, or join the newly resurrected import-sig for more discussions. The PEP has two main purposes. First, it fixes the problem of which package owns a namespace's __init__.py file, e.g. zope/__init__.py for all the Zope packages. In essence, it eliminates the need for these by introducing a new variant of .pth files to define a namespace package. Thus, the zope.interfaces package would own zope/zope-interfaces.pth and the zope.components package would own zope/zope-components.pth.  The presence of either .pth file is enough to define the namespace package.  There's no ambiguity or collision with these files the way there is for zope/__init__.py.  This aspect will be very beneficial for Debian and Ubuntu.

Second, the PEP defines the one official way of defining namespace packages, rather than the multitude of ad-hoc ways currently in use.  With the pre-PEP 382 way, it was easy to get the details subtly wrong, and unless all subpackages cooperated correctly, the packages would be broken.  Now, all you do is put a * in the .pth file and you're done.
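
To make that concrete, an installed layout under the PEP might look something like this (a sketch based on the description above):

zope/
    zope-interfaces.pth    <- contains just: *
    interfaces/
        ...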

Sounds easy, right?  Well, Python's import machinery is pretty complex, and there are actually two parallel implementations of it in Python 3.3, so gaining traction on this PEP has been a hard slog.  Not only that, but the PEP has implications for all the packaging tools out there, and changes the API requirements for PEP 302 loaders.  It doesn't help that import.c (the primary implementation of the import machinery) has loads of crud that predates PEP 302.

On the plus side, Martin von Loewis (the PEP author) is one of the smartest Python developers around, and he's done a very good first cut of an implementation in his feature branch, so there's a great place to start.

Eric Smith (who is the 382 BDFOP, or benevolent dictator for one pep), Jason Coombs, and I  met once before to sprint on PEP 382, and we came away with more questions than answers.  Eric, Jason, and I live near each other so it's really great to meet up with people for some face-to-face hacking.  This time, we made a wider announcement, on social media and the BACON-PIG mailing list, and were joined by three other local Python developers.  The PSF graciously agreed to sponsor us, and while we couldn't get our first, second, third, or fourth choices of venues, we did manage to score some prime real-estate and free wifi at Panera.

So, what did we accomplish?  Both a lot, and a little.  Despite working from about 4pm until closing, we didn't commit much more than a few bug fixes (e.g. an uninitialized variable that was crashing the tests on Fedora), a build fix for Windows, and a few other minor things.  However, we did come away with a much better understanding of the existing code, and a plan of action to continue the work online.  All the gory details are in the wiki page that I created.


One very important thing we did was to review the existing test suite for coverage of the PEP specifications.  We identified a number of holes in the existing test suite, and we'll work on adding tests for these.  We also recognized that importlib (the pure-Python re-implementation of the import machinery) wasn't covered at all in the existing PEP 382 tests, so Michael worked on that.  Not surprisingly, once that was enabled, the tests failed, since importlib has not yet been modified to support PEP 382.


We also came up with a number of questions where we think the PEP needs clarification.  We'll start discussion about these on the relevant mailing lists.


Finally, Eric brought up a very interesting proposal.  We all observed how difficult it is to make progress on this PEP, and Eric commented on how there's a lot of historical cruft in import.c, much of which predates PEP 302.  That PEP defines an API for extending the import machinery with new loaders and finders.  Eric proposed that we could simplify import.c by removing all the bits that could be re-implemented as PEP 302 loaders, specifically the import-from-filesystem stuff.  The other neat thing is that the loaders could probably be implemented in pure-Python without much of a performance hit, since we surmise that the stat calls dominate. If that's true, then we'd be able to refactor importlib to share a lot of code with the built-in C import machinery.  This could have the potential to greatly simplify import.c so that it contains just the PEP 302 machinery, with some bootstrapping code.  It may even be possible to move most of the PEP 382 implementation into the loaders.  At the sprint we did a quick experiment with zipping up the standard library and it looked promising, so Eric's going to take a crack at this.


This is briefly what we accomplished at the sprint.  I hope we'll continue the enthusiasm online, and if you want to join us, please do subscribe to the import-sig!

Read more