Canonical Voices

What Sidnei da Silva talks about


Since early August I’ve been looking at improving the state of the art in container cloning with an ultimate goal of making Juju faster when used in a local provider context, which happens to be where I spend most of my days lately. For those who don’t know, the local provider in Juju uses LXC with the LXC-provided ‘ubuntu-cloud’ template in order to provide an environment that’s as similar as possible to provisioning a cloud instance elsewhere.

In looking at the various storage backends supported by LXC and experimenting with each of them, I stumbled on various issues, from broken inotify support in overlayfs to random timing issues deleting btrfs snapshots. Eventually I discovered the holy grail of LVM Thin Provisioning and started working on a patch to LXC which would allow it to create volumes in a thin provisioned pool. In the meantime, Docker announced their intent to add support for LVM Thin Provisioning too. I’m happy to announce that the work I started in August is now merged into LXC as various separate pull requests (#67, #70, #72, #74) and is already available in the LXC Daily PPA. I’ve even created a Gist showing how to use it.

As pointed out by a colleague today, Thin Provisioning support should soon land in Docker itself. I applaud the initiative, and am really looking forward to seeing this feature land in Docker. I do wish, though, that there had been a little more coordination to get that work upstreamed into LXC instead, to the benefit of everyone. Regardless, I’m committed to making sure that the differences between whatever ends up landing in Docker and what I’ve added to LXC eventually converge. One such difference is that I’ve simply shelled out to the ‘lvs’ command, while Alexander is using libdevmapper directly, something I’ve looked at doing but felt was a little over my head. I also haven’t gotten around to making the filesystem default to ext4 with DISCARD support yet, but that’s at the top of my list.

So without further ado, let’s look at an example with a loopback-mounted device backed by a sparse-allocated file.

$ sudo fallocate -l 6G /tmp/lxc
$ sudo losetup -f /tmp/lxc
$ sudo pvcreate /dev/loop0
$ sudo vgcreate lxc /dev/loop0
Nothing special so far: a 6G sparse file, attached to a loop device, with a VG named ‘lxc’ created on it. Now for the next step, creating the Thin Provisioning pool. The size of the LV cannot be specified as 100%FREE because some space needs to be left for the LV metadata:
$ sudo lvcreate -l 95%FREE --type thin-pool --thinpool lxc lxc
$ sudo lvs
  LV   VG   Attr      LSize Pool Origin Data%  Move Log Copy%  Convert
  lxc  lxc  twi-a-tz- 5.70g              0.00
Now if you have a recent enough LXC as the one from the PPA above, creating a LVM-backed LXC container should default to creating a Thin Provisioned volume on the thinpool named ‘lxc’, so the command is pretty simple:
$ sudo lxc-create -n precise-template -B lvm --fssize 8G -t ubuntu-cloud -- -r precise
$ sudo lvs
  LV                VG   Attr      LSize Pool Origin Data%  Move Log Copy%  Convert
  lxc               lxc  twi-a-tz- 5.70g             17.38
  precise-template  lxc  Vwi-a-tz- 7.81g lxc         12.67
If you wanted to use a custom-named thin pool, that’s also possible by specifying the ‘--thinpool’ command line argument.
Now, how fast is it to create a clone of this container? Let’s see.
$ time sudo lxc-clone -s precise-template precise-copy
real 0m0.276s
user 0m0.021s
sys 0m0.085s

Plenty fast, I say. What does the newly created volume look like?

$ sudo lvs
  LV                VG   Attr      LSize Pool Origin           Data%  Move Log Copy%  Convert
  lxc               lxc  twi-a-tz- 5.70g                       17.44
  precise-copy      lxc  Vwi-a-tz- 7.81g lxc  precise-template 12.71
  precise-template  lxc  Vwi-a-tz- 7.81g lxc                   12.67

Not only was cloning the container super fast, it also barely used any new data in the ‘lxc’ thin pool.

The result of this work should soon make its way into Juju and, I hope, Docker, either via Alexander’s work or some other way. Whether you’re into Docker or are a more hardcore LXC user, I hope you enjoy this feature and make good use of it.

Read more

As part of my job as Operations Engineer on the Ubuntu One team, I’m constantly looking for ways to improve the reliability and operational efficiency of the service as a whole.

Very high on my list of things to fix is an item to look into Vaurien, and to make some tweaks to the service so it copes better with outages in other parts of the system.

As you’ve probably realized by now if you’re somehow involved in the design and maintenance of any kind of distributed system, network partitions are a big deal, and we’ve had some of those affect our service in the past, some with very interesting effects.

Take our application servers, for example. They’ve been through many generations of rewrites, and switched from one WSGI server to another in the past (not long before I joined the team), each with a particular issue. Either they didn’t scale, or crashed constantly, or had memory leaks. Or maybe none of those (it was before I joined the team, so I wouldn’t know for sure). By the time I joined, Paste, of all things, was one of the WSGI servers in use in part of the service, and Twisted WSGI was used in another part.

(The actual setup of those services is very interesting in itself. It’s a mix of Twisted and Django, and many others have done this before, so it’s not very unique, but there are internal details which are quite interesting. More below.)

Having moved from another team that used Twisted heavily, I decided to call it out and settle on Twisted WSGI, which seemed just fine.

As for the stability and memory issues, we started ironing them out one by one. It turns out the majority of the problems had nothing to do with the WSGI server itself, and everything to do with not cleaning up resources correctly, be it temporary files, file descriptors, or cycles between producers and consumers.

And everything was perfect.

But then we got a few networking issues and hardware issues. Some of the servers were eventually moved to a different datacenter and things got even more interesting. I’ll go into the details of the specific problems that I’m hoping to approach with Vaurien in a different post, but suffice it to say that talking to many external services from a threaded server doesn’t stay pretty when there’s a network blip.

So, speaking of threaded and Twisted, we come to the subject of this post.

In front of a subset of our services we currently have 4 HAProxy instances on different servers. They are all set up to use httpchk every 2 seconds, which by default sends an OPTIONS request to ‘/’. If you’re still following, we have a Django app running there, and depending on how your Django app is configured, it might just take that OPTIONS request and treat it like a GET, effectively (in our case) rendering a response just as if a normal browser had requested it. Turns out that page is not particularly lean in our case.

So you take 4 servers, each effectively doing a GET request to your homepage every 2s, times many processes serving that page across a couple of hosts, and you have a full plate for someone looking for things to optimize.
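For the record, one cheap way out of the OPTIONS-as-GET trap is to answer the probe before it ever reaches the framework. Here’s a framework-agnostic sketch of the idea as plain WSGI middleware; the names are hypothetical and this is not our actual code (our eventual fix went further, as described below in the original post):

```python
# Hypothetical sketch: short-circuit OPTIONS health-check probes before
# they reach the (expensive) Django application.

def cheap_healthcheck(app):
    """WSGI middleware: answer OPTIONS probes without rendering a page."""
    def middleware(environ, start_response):
        if environ.get('REQUEST_METHOD') == 'OPTIONS':
            body = b'OK'
            start_response('200 OK', [
                ('Content-Type', 'text/plain'),
                ('Content-Length', str(len(body))),
            ])
            return [body]
        # Anything else goes through to the real application.
        return app(environ, start_response)
    return middleware
```

Wrapped around the WSGI application, the HAProxy check then costs a header parse instead of a full page render.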

To make it more fun, early on I added monitoring of the thread pool used by Twisted WSGI, sending metrics to Graphite. Whenever we had a network blip we saw the queue growing and growing without bound. This was actually a combination of a couple things, which I’m still working on fixing:

  1. HAProxy will keep triggering the httpchk after the service is taken out of rotation.
  2. Twisted WSGI will keep accepting requests and throwing them in the thread pool queue, even if the thread pool is busy and the queue is building up.
  3. We currently do a terrible job of timing out connections to external services, so a minor blip can easily cause the thread pool queue to build up.
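To illustrate the third point: the fix boils down to never letting a call to an external service block forever. A minimal sketch in Python, with placeholder timeout values (the real services and numbers are obviously different):

```python
import socket

# A conservative process-wide default, so sockets created without an
# explicit timeout still can't hang forever on a network blip.
socket.setdefaulttimeout(5.0)

def connect_with_timeout(host, port, timeout=2.0):
    """Open a TCP connection whose connect *and* subsequent I/O fail fast.

    create_connection() leaves the given timeout set on the returned
    socket, so later recv()/send() calls are bounded too, not just the
    initial connect.
    """
    return socket.create_connection((host, port), timeout=timeout)
```

A blocked worker thread now gets a socket.timeout it can handle, instead of sitting on the queue indefinitely.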

As a strategy to alleviate that problem I came up with the following solution:

  1. Implement a custom thread pool that triggers a callback when going from busy -> free and from free -> busy (where busy is defined as: there are more requests queued than idle threads).
  2. Change the response to the HAProxy httpchk to simply report that busy/free state.
  3. Change the handling of that HAProxy check to *not* go through the thread pool.
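For the curious, here is a much-simplified sketch of the first step, using only the standard library rather than Twisted’s actual thread pool (so all names are made up, and the real implementation has more to it): ‘busy’ flips on when more work is queued than there are idle threads, and a callback fires on each state change, which is exactly what the health-check response can then report.

```python
import threading
import queue

class MonitoredPool:
    """Toy thread pool that reports busy/free transitions via a callback."""

    def __init__(self, size, on_state_change):
        self._queue = queue.Queue()
        self._idle = size
        self._lock = threading.Lock()
        self._busy = False
        self._on_state_change = on_state_change
        for _ in range(size):
            threading.Thread(target=self._worker, daemon=True).start()

    def _check_state(self):
        # Called with the lock held; fires the callback only on a change.
        busy = self._queue.qsize() > self._idle
        if busy != self._busy:
            self._busy = busy
            self._on_state_change(busy)

    def submit(self, fn, *args):
        with self._lock:
            self._queue.put((fn, args))
            self._check_state()

    def _worker(self):
        while True:
            fn, args = self._queue.get()
            with self._lock:
                self._idle -= 1
                self._check_state()
            try:
                fn(*args)
            finally:
                with self._lock:
                    self._idle += 1
                    self._check_state()
                self._queue.task_done()
```

A health-check handler then only needs to read the last reported state, with no trip through the pool itself.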

(There’s a few more details that I won’t get into in this post, but that’s the high-level summary.)

I’m fairly confident that this will fix (or at least alleviate) the main issue, which is the queue growing without bounds in the thread pool; the queueing will instead move to HAProxy. But after looking through the metrics today I saw an unintended consequence of the changes.



In retrospect, it seems fairly obvious that this would be one of the expected outcomes. I was simply surprised to see it, since it was not the immediate goal of the proposed changes, but simply a side effect of them.

I hope you enjoyed this glimpse into what goes on at the heart of my job. I expect to write more about this soon, and maybe explore some of the details that I didn’t get into, since this post is already too long.

Read more

Due to some unplanned traveling I ended up near the Bay Area last week. More specifically, Canonical was holding an internal Cloud Sprint in Oakland, CA, and Martin asked me to participate and push our agenda for the upcoming click packages upload and download services, which need to be live by October, at least in their simplest form. But I’ll tell you more about that in a separate post.

What I want to share with you today is the joy of being able to connect with old friends and recollect memories, as I mentioned I was longing for in my last post. In the few days I was in California I managed to catch up with Limi and Philipp, said an en passant hi to Rob Miller at the Mozilla SF office, had dinner with Gustavo, walked around the city with Fernando, Alberto, and Geoff (ending up at an amazing Chinese restaurant pretty much by accident), and paid a visit to Marlon, who took me on a guided tour of the Facebook HQ followed by lunch at The Cheesecake Factory which I couldn’t refuse. It was exhausting, but really great catching up with everyone!

A recurring topic between all of us was the general issues that all of our companies (Mozilla, Canonical, Facebook) have with general public perception. Most interestingly perhaps is the similarity between Canonical and Facebook when it comes down to privacy matters, how there seems to be a disconnect between the internal and external messaging on those matters, and how much the public perception is biased by the media and the very loud minority of privacy tinfoil hat zealots. I wish I could do more to help with solving that. Perhaps pushing for more transparency, better communication at least from the technical side of things could be a way to improve that.

Tech talks aside, I was simply overwhelmed by how popular my kids’ pictures and videos are amongst friends. Every single person I talked to was quick to mention that as the very first thing. Oddly, that generally isn’t reflected in likes and comments on those Facebook posts, which is an interesting observation. Are people generally afraid of clicking that Like link, or is it too much effort for them? I’m sure it would make for a great usability study.

I hope to explore the outcome of the sprint a bit more in a later post. Suffice it to say that I was really glad to be present and contribute some feedback to all the planning that’s going into the next cycle, and the opportunity to meet some old friends while at it was invaluable. Looking forward to doing more of that in the coming months, at FISL and PythonBrasil.

As an article I read yesterday mentioned, we tech heads seem to live in a bubble that mostly bounces between social networks and post-work drinks with colleagues, usually from the same company. I wish we could all be more social in the physical world, and talk more about things that are not so tech-related. About life, and family, and non-work things, and enjoy ourselves more.

And headed straight into the shining sun.

Read more

Ever since I got my copy of Steve Souders’ Even Faster Web Sites I’ve been obsessed with speed. During my day job I’m constantly looking for things that can be improved to make the user experience smoother, especially for first-time visitors. I’m fairly happy with what we’ve achieved in the last year, though there are always things to be improved.

Today I’m going to share with you one of the tricks that we discovered almost by accident and that can help make your website faster, if YUI is your Javascript framework of choice.

We’ve started using YUI in Landscape around March of 2009, by rewriting all of our ExtJS-based scripts (which weren’t that many) to use YUI. Since our application requires HTTPS due to potentially sensitive information, we had to self-host YUI, to avoid mixed-content warnings. Initially we had little experience with it, so we went with the easiest option available: loading the YUI seed file and letting everything else be loaded on-demand by the YUI Loader, without a combo handler (more on that later).

Soon enough we noticed that it wasn’t such a great idea. First-time access was terrible, with load times over 30 seconds in some cases, due to the combination of HTTPS connection overhead, no static resource caching and a large number of requests to fetch all the resources. It was even worse than that, because most browsers either don’t cache HTTPS content by default, or only cache it in memory, unless you set the proper caching headers. To give you an idea of what it looked like, here’s what a trivial page using the ‘overlay’ module looks like in terms of loading.

No combo loading at all

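As an aside, the caching headers in question are nothing exotic. Here’s a hypothetical sketch of the kind of far-future headers that persuade browsers to keep HTTPS-served, versioned static resources on disk; the exact values are illustrative, not our real configuration:

```python
import email.utils
import time

def static_cache_headers(max_age=31536000):  # one year, for versioned URLs
    """Headers that let browsers cache HTTPS-served static files on disk."""
    return [
        # 'public' matters over HTTPS: without it some browsers will only
        # cache secure responses in memory, or not at all.
        ('Cache-Control', 'public, max-age=%d' % max_age),
        # Expires is belt-and-braces for older HTTP/1.0 caches.
        ('Expires', email.utils.formatdate(time.time() + max_age,
                                           usegmt=True)),
    ]
```

A far-future max-age is only safe when the URL changes whenever the content does, hence the versioned-URL assumption.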

It was pretty clear that this kind of performance would not be acceptable, so I started looking for options to reduce the number of requests. It was around that time that I heard about Steve Souders’ work on web performance, through a long-time friend from the Plone community who happened to be working at Google and shared a link to Souders’ blog post on ‘@import’.

Turns out that the YUI Loader supports using a combo handler for reducing the number of requests, when loading modules on demand. What this ‘combo handler’ does is basically take a GET request with a bunch of filenames as parameters and return a single response with the respective files concatenated. Here’s an example using the combo server from YUI’s CDN.

Combo loading with a single 'Y.use()'

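To make the idea concrete, here’s a toy combo handler as a plain WSGI app, in the spirit of the Python one mentioned further down, though this sketch is mine, not the real code. The filenames arrive as bare query-string keys (‘?a.js&b.js’) and the response is their concatenation:

```python
import os
from urllib.parse import parse_qsl

def make_combo_app(root):
    """Build a WSGI app serving '?file1.js&file2.js' as one response."""
    root = os.path.abspath(root)

    def combo(environ, start_response):
        # The requested filenames are the query-string keys, in order.
        names = [key for key, _ in parse_qsl(environ.get('QUERY_STRING', ''),
                                             keep_blank_values=True)]
        chunks = []
        for name in names:
            full = os.path.abspath(os.path.join(root, name))
            # Refuse anything that escapes the served directory.
            if not full.startswith(root + os.sep):
                start_response('400 Bad Request',
                               [('Content-Type', 'text/plain')])
                return [b'invalid filename']
            with open(full, 'rb') as f:
                chunks.append(f.read())
        body = b'\n'.join(chunks)
        start_response('200 OK',
                       [('Content-Type', 'application/javascript'),
                        ('Content-Length', str(len(body)))])
        return [body]

    return combo
```

A production version would also need correct MIME types for CSS, caching headers, and a cap on the number of files per request.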

The problem was that we couldn’t use YUI’s combo server on the CDN because it doesn’t support HTTPS, which was a requirement for us.

In November of 2009, at UDS-Lucid in Dallas, we had our first YUI-only cross-team sprint, totalling about 20 people from Launchpad, Landscape, ISD and Ubuntu One. The main goals were to spread knowledge on YUI and improve support for IE on our shared codebase (lazr-js), but I managed to sneak in two things for the Landscape team: implementing a Python-based combo loader (served by Twisted, but it’s just a plain WSGI app) and extracting YUI module metadata, so that we could make it easier to load extension modules on-demand, the code for which ended up as part of the lazr-js project.

At this point, however, we realized that there was a significant issue with using the combo loader on an application as diverse as Launchpad or Landscape: depending on how you group your ‘Y.use()’ calls, the combo URL generated by the YUI Loader can vary significantly, which means that re-use from the browser cache is less than optimal. Here’s an example of loading both the ‘anim-color’ module and the ‘overlay’ module within a page. Compare it to the previous example and notice how the parameters passed to the combo are completely different, since the second request (to load the ‘overlay’ module) only requests modules which haven’t been loaded by the first request (to load the ‘anim-color’ module).

Combo loader can be a caching problem


In a hurry to get the situation solved ASAP, and taking a cue from our fellow developers on the Launchpad team, we went with a simpler alternative: manually combining all of YUI, including the seed file, and loading all of that in a ‘script’ tag in the document ‘head’. The situation improved quite dramatically, but there was still more to be done. Although the number of requests was much smaller, the total page size was actually larger, since we were loading a lot of stuff that wasn’t even being used. Worse, since we were putting that ‘script’ tag in the ‘head’, it was blocking the whole page from loading until the script finished downloading, which, after following Souders’ blog for a while, I realized wasn’t such a great idea either. Here’s an example of what that looked like.

Combined modules in the head


At this point we had a significant codebase of YUI-based modules, and so did the Launchpad team, but we had gone from solely relying on the YUI Loader to not using it at all (by just loading all the code in one big honking file).

Once we got the module metadata extraction figured out, though, I resumed working on improving the first-view load time, by reducing the manually combined modules to the bare minimum shared by all of our pages. By having a single file with the common modules loaded on every page, we can also be sure that it is highly cacheable. We then let the YUI Loader kick in and load the missing modules on less-used pages, which was a really nice improvement in page weight and in time to the ‘onload’ event.

At this point Souders started talking about asynchronous loading of scripts and the ‘defer’ attribute, and I got my copy of Even Faster Web Sites. I started pondering whether there was something that could be done to defer the loading of the combined modules while still letting the YUI Loader do its job.

The problem lies in the fact that YUI asynchronously loads modules that are not on the page yet, calling a callback function (the last argument to ‘Y.use()’) once all the scripts containing the required modules have finished loading. If all the modules are already on the page, it just calls the callback right away. So by just tacking a ‘defer’ attribute on the ‘script’ tag for the combined modules there would be a race condition between the loader checking if the modules were loaded and the script with the combined modules being loaded.

Then it struck me: if the last argument to ‘Y.use()’ is just a callback function, could we queue those functions to be called at a later time and load the combined modules ourselves, before letting the YUI Loader proceed? After a little back and forth with Dav Glass and some pseudocode exchange, I got an implementation of this idea (which I’m calling the ‘Prefetch YUI Loader Hack’). And thus we managed to move the loading of all the combined modules from a ‘script’ in the ‘head’ to asynchronous script loading, shortening even further our time to the ‘onload’ event. Here’s an example of what it looks like.

Prefetch with a single request


But wait, there’s more! Souders then started talking about parallel loading, tickling my speed bug again. After thinking about the problem for a while, enlightenment came again: when a YUI module is loaded, it is not executed right away, but added to an internal registry of modules. What that means is that regardless of what other modules they depend on, they can all be loaded independently from each other! You can break up a manually combined file into N smaller files and have them loaded in up to N parallel connections (where N varies by browser and browser version), and when they are all done you let the YUI Loader kick in. Here’s an example with two parallel downloads plus an extra module not prefetched.

Prefetch, with parallel download


So that’s where things stand now. There are still optimizations that can be done in Landscape, like using the combo loader to reduce the number of requests on less common pages, or even using fixed URLs to the combo loader with the prefetch hack, to simplify the build process. We also need to start using Caridy’s Event Binder module, since now the pages load so fast that the users start clicking around before the event handlers are in place (ha!).

I am also pushing to get those kinds of tips documented and passed around Canonical, through a project codenamed Dare2BFast, which has the goal of coming up with a set of Web Performance Guidelines that all Canonical web sites should follow, both on the frontend and backend. Stay tuned!

Read more

This morning I woke up with this song in my head. It reminds me quite a bit of the Rolling Stones, and it’s very appropriate for the mood I’m in today.

I’m sitting here at Canonical’s office in Montreal, where we are having another Landscape sprint. There are a couple of things that make this sprint special, one of them being that Frank Wierzbicki has joined us! Welcome to the team, Frank, and I hope you enjoy it!

It has also coincided with the end of our 1.5.5 milestone, which rolled out yesterday, with some interesting highlights:


Contrary to what other people might tell you, YES, speed does matter. We’ve been slowly (ha!) rolling out improvements to page load speed, and I’m happy to say the improvements are very noticeable. Things we’ve done to improve this range from setting proper caching headers for static resources to heavily abusing the YUI Loader. I should write more about that on a more appropriate occasion. There are always things to improve, so keep an eye on it!

Faster Trials

Over the last four years, the trial registration for Landscape hasn’t been… enjoyable, to say the least. This week we finally rolled out the first iteration of our simplified registration process, which removes a manual approval step that used to take several weeks. That means you can now register for a trial account in Landscape and start using it right away! Quite a novel concept, isn’t it? :) We’re planning some more updates, so that when starting the registration process you have a better sense of where you are in the process and what steps are involved.

We have more exciting updates coming up before the end of the year, so keep an eye on it. But in the meantime, hit that ‘Registration’ link in Landscape, get it like you like it, and enjoy!

Read more

Just a quick post to get me started at blogging again.

Over the last year (wow, time flies by!) I’ve been working at Canonical, as part of the Landscape team. This is a very diverse team with lots of different skills, and somehow I found myself naturally gravitating towards working more closely on frontend-related issues, of which I could highlight writing YUI3 widgets, speeding up the page loading experience and creating a nice testing infrastructure. There’s a ton of things I could write about that, and I really plan to. But today’s entry will be pretty short.

As part of a brain-break task I fixed some of our Javascript tests today so that they would run on Google Chrome. We haven’t been targeting Chrome so far, but that might change soon, driven by Google Analytics stats of people using Landscape.

But, the thing that really caught my attention was the difference in speed between Chrome and Firefox.

For comparison:

Google Chrome 5.0.307.7 beta

$ BROWSER=google-chrome ./bin/test -1vpc --layer=JsTestDriverLayer
Running tests at level 1
Running canonical.testing.javascript.JsTestDriverLayer tests:
  Set up canonical.testing.javascript.JsTestDriverLayer in 1.020 seconds.
  Ran 318 tests with 0 failures and 0 errors in 9.545 seconds.
Tearing down left over layers:
  Tear down canonical.testing.javascript.JsTestDriverLayer in 0.366 seconds.

Firefox 3.6.3pre

$ BROWSER=firefox ./bin/test -1vpc --layer=JsTestDriverLayer
Running tests at level 1
Running canonical.testing.javascript.JsTestDriverLayer tests:
  Set up canonical.testing.javascript.JsTestDriverLayer in 1.014 seconds.
  Ran 318 tests with 0 failures and 0 errors in 15.032 seconds.
Tearing down left over layers:
  Tear down canonical.testing.javascript.JsTestDriverLayer in 0.349 seconds.

Firefox 3.7a3pre

$ BROWSER=firefox-3.7 ./bin/test -1vpc --layer=JsTestDriverLayer
Running tests at level 1
Running canonical.testing.javascript.JsTestDriverLayer tests:
  Set up canonical.testing.javascript.JsTestDriverLayer in 0.804 seconds.
  Ran 318 tests with 0 failures and 0 errors in 13.433 seconds.
Tearing down left over layers:
  Tear down canonical.testing.javascript.JsTestDriverLayer in 0.379 seconds.

Disclaimer: Both instances of Firefox were started with the “-safe-mode” flag, which disables all plugins and extensions. Also, as they say around here at Canonical: NOT A METRIC. But interesting still.

If you look closely at this post you might find some hints about things we’ve been working on and which I hope to write about, in addition to general tips and tricks about page speed optimization from experiences in both Landscape and Launchpad.

Read more

This is a really short announcement, since it’s almost 4am here in Brazil and I should actually be sleeping, instead of building installers for Plone (ha!). But hey, this is exciting enough to keep some people awake all night.

Today, Canonical announces the availability of Landscape Dedicated Server!

So what is it?

One of the many things the Landscape team at Canonical has been working on since early this year is a version of Landscape that can be run on a local network, as opposed to the hosted, Software-as-a-Service version of Landscape that is available to the general public at the moment.

Many people have left us feedback saying that this would be desirable for them, and would actually make Landscape an option in environments where data cannot leave the local network boundaries due to strict policies. So if you’re one of these people, or you have evaluated Landscape in the past but decided it was not for you for this specific reason, it’s time to give Landscape a second look!

So thanks to everyone who has submitted ideas and requests for new features. We’re listening! Even more feedback-driven features are being added monthly, free of charge for existing customers, and the user interface is being polished and fine-tuned for managing large installations. Stay tuned for more announcements!

Read more

Over the past few months, friends and family have been very curious about how my new job is going, and it’s been hard to stop for a moment and go into detail about it. I’ve been simply nodding and saying “It’s fine”.

This is an attempt at summarizing all the activity of the last five months, though it’s far from a short summary. If I had to pick two words to describe my first five months at Canonical, they would be “Pure Awesomeness”. For a more detailed view, grab a cup of joe or your favorite other beverage and keep on reading.

Today is a special day. Exactly five months ago, on January 5th, I joined Canonical to work on the Landscape project.

It has been quite a ride so far, with two sprints with the Landscape team, AllHands and UDS in Barcelona just a week ago, and lots of excitement about the future. Saying that I’m completely stunned by the work everyone at Canonical has been putting together, and by how much the teams have grown in the last few months, doesn’t do it justice.

At AllHands and UDS we got a short preview of the things that are coming out in the next cycle and beyond. An example of that is the newly formed Design and User Experience (DUX) team which will not only be focusing on Ubuntu itself, but on many other areas across Canonical and the whole Open Source community in general.

At UDS, the DUX team had a special ‘booth’ where anyone from any project could walk up to them and get advice about their personal or favorite application. One person who has already put such advice to good use is Seif Lofty, from Gnome Zeitgeist. I met Seif during breakfast on the first day of UDS and I cannot describe how excited he was about simply *being* at UDS. And he was certainly twice as happy when he left.

Another person I had the joy of meeting was David Siegel, of Gnome Do fame. We teamed up during AllHands for the infamous “Fun In The Woods” activity, which had some people walking like zombies the day after.

Even more importantly, I was able to meet many more colleagues from different teams and wrap up some loose ends from tasks I had started during the first couple of months. I’m definitely impressed by the number of plain brilliant people that are part of Canonical today.

In a sense, being at a Canonical event is very much like being at a Plone conference. Everyone seems to be very receptive to new ideas and very friendly and laid back. And as a bonus, I was able to exercise my (not so secret now) power of throwing some crazy ideas around and seeing how they influence people. And man, I’m already impressed by the outcomes, just a week after the fact.

To me, this has been the most rewarding thing so far: being able to make big contributions not in lines of code, but in ideas that can make a concrete difference in the hands of the right people. This is something that is only possible at a company the size Canonical is at the moment, where it’s just big enough that you can grab a mind or two to push an agenda without affecting the rest of the team, and still small enough that you can influence decisions.

As my colleague Jamu would best describe, “I’m PUMPED!”. :)

But that’s not all. Software-wise, I was able to make some big contributions too. The Landscape team just finished a 6-month development cycle that brought many cool features to life. I’m really happy with that, and especially with the speed with which this team can get features from the blackboard into reality. It’s also a much different environment than what I was used to, with very well-defined and refined processes for ensuring the overall quality of anything that is produced. One process I’m especially enjoying is the requirement for two positive reviews before landing a branch. I hope to talk more about that soon (that is, sooner than 5 months from now *wink*).

As for my role in the team, it is quite different than what I’m used to. I’ve been focusing a lot on the UI aspects of Landscape, on ways to make things more obvious and more streamlined. I’ve been also writing a ton of Javascript, and collaborating with other teams to define better policies for Javascript testing in general. And finally, we will now have a person from the DUX team dedicated to working with us, which will push work on the Landscape UI even further.

I also had the chance of interacting with the Launchpad team, which has a much more refined process due to the size of their team. Over at Launchpad, I started a branch back in mid-December, even before starting at Canonical, to allow Launchpad to use the Chameleon Template Engine.

That was another wild ride, and during the course of this project I was able to contribute tons of fixes upstream to Malthe Borch to make Chameleon even more compatible with plain old ZPT. In fact, it is so compatible at the moment that, thanks to the magic of z3c.ptcompat, Launchpad will be able to run *both* Chameleon and ZPT with the flip of an environment variable. Even more stunning, the changes required to the code were minimal: basically changing imports to use z3c.ptcompat, and, in templates, fixing some non-XHTML-compliant ones and removing unused i18n tags. I am happy to announce that this branch will soon be merged (it was submitted to PQM, successfully accepted, and is waiting in the buildbot queue to land). The bad news is that not all tests pass at the moment with Chameleon enabled, but we will be dogfooding and fixing those tests as we go. It was too much pain already to maintain a nearly 6-month-old branch outside the main tree. ;)

I am really interested in many of the things that the Launchpad team is doing, process-wise. The PQM seems like a very nice idea for a bigger team like theirs, though it would probably be useful to our smaller team in Landscape too, and to others in general. Hopefully I will get a chance to explore it more and talk about it during the upcoming FISL 10, in Porto Alegre.

Last, but not least, I’m also working on getting nightly builds of the Bzr Installer for Windows rolling, along with a more streamlined process for the official builds. Karl Fogel, of Producing OSS fame and our Launchpad Ombudsman, is making sure I keep my promises about that, which is yet another great incentive.

All in all, there’s of course a ton of things I forgot to talk about that happened in the last 5 months, but this post is already getting too long, so I will stop right here and save some of the meat for a future one. Stay tuned!


No, this is not a blog post about the kind of landscape you’re accustomed to, though it might trigger a few ‘I want to be a Landscape Architect’ thoughts from a person or two.

The news this time is that I’m going through a landscape change myself, and to me it’s still a bit scary just to think about it.

The last such change in my life happened roughly ten years ago, when I left my job as a PHP Programmer and Systems Administrator at a small ISP to start my own company with a few colleagues from university. At that time, leaving PHP behind to learn these fancy new (to me) things called Zope and Python felt really weird, not to mention the fact that I was about 20 and knew no one my age who had successfully started a web development company (I mean, I live in a very small town; this is not Silicon Valley).

So, cutting to the chase, I would like to let everybody know that I will be joining the Landscape team at Canonical starting January 5th. I will not be leaving Enfold Systems, though. We will keep working together part-time at least until April, by which point we expect to make at least one big release of a fully eggified Enfold Server based on Plone 3.2. After April, I will still be doing work for Enfold Systems, but the time available for that will be much more constrained.

Joining the Landscape team is very exciting to me, not only because I will get to work with some really smart people, but also because I will be working from home, supervised by one of the top minds in the Brazilian Python scene: Gustavo Niemeyer, responsible for gems like `Mocker`, `Storm` and `Smart`. And on top of that, I will get to learn new ways of managing and collaborating within a distributed software development team, the subject of my graduation thesis.

Last, but not least, there’s a long story of back and forth between me and Canonical that dates as far back as 2004. That year, DebConf 4 was happening in Porto Alegre, which is about a one-hour drive from where I live.

Hanging out on the #zope3-dev IRC channel, I noticed that one guy, Steve Alexander, was connected from an IP range in Brazil. That struck me as odd: I didn’t know Steve very well, but maybe well enough to guess that something hot was going on. Steve told me he was there for DebConf, and I invited myself to stop by and say `hi`.

Arriving at DebConf, I met Steve in person for the first time, and again had an odd sensation: Steve and about 30-40 folks were separated from the rest of the DebConf crowd in their own room, hacking away. I also met Mark Shuttleworth there and saw him talk about his space flight. That was one of the most exciting and heart-warming moments for me, since my dream as a kid was to be an astronaut.

Later that same day I came to know they were creating a new distro: they were taking name suggestions and voting on them. It was no surprise, at least to me, when I first saw Ubuntu mentioned in the news, though I don’t recall clearly whether that was one of the suggestions being voted on.

Since I had just finished a big project using Zope 3 and relational databases, Steve quickly asked for my input on the preliminary design of a Zope 3-based web application for managing translations, which I promptly gave. The code I was looking at right there was the beginning of Launchpad. Until very recently, Launchpad was powered by a creation of mine named `sqlos`, which has now been replaced by `storm`.

Steve asked me if I was interested in joining the team, but I had to decline, given that I had just signed up with Enfold Systems, which was only about a month old at the time. I did put them in touch with a good friend I had met only a couple of years before: Christian Reis (kiko). In retrospect, that might have been one of the best contributions I have made to Canonical, and to Open Source in general.

So, all in all, very exciting news for me. Canonical has a special place in my heart, since I basically watched Ubuntu and Launchpad being born and have some really good friends working there. The situation is no different with Enfold Systems. I have been working with Alan Runyan since before Enfold was Enfold, and really since before Plone was Plone: when Plone 1.0 was announced, I was with Alan Runyan, Paul Everitt and Alexander Limi in Paris, at SolutionsLinux 2003, participating in a Zope 3 sprint, mind you.

To summarize, I will be splitting my time between Enfold Systems and Canonical between January 5th and April 1st, 2009. After that I will be working full-time at Canonical but still expect to contribute significantly to Enfold Systems. One responsibility I have right now that I don’t want to drop is building the Plone installers for Windows. Hopefully I will keep doing that. Unless some Ubuntu dude sneaks in by night and erases my Windows partition, that is. :)
