Canonical Voices

What The Raving Rick talks about

Rick Spencer

Extract Class Refactor Built in QtCreator

Following up on yesterday's post about how I think about inheritance, I thought I'd do a quick post about a refactoring feature built into the QtCreator editor.

In this example, I decided I wanted to add a box that lets the user enter a name for a high score if they achieved it, and then to display all the high scores. I got started by creating a UbuntuShape with a column and sub-components in main.qml, but quickly realized that I would have a lot of behavior and presentation to manage. This would be much easier to develop in its own component (or "class" as I think of it).

So, I just right clicked in the editor and used the refactoring menu to "Move Component into Separate File."
I got a dialog that asked for a name; I chose "HighScoreBox". QtCreator created a new file for me, and replaced all of my QML code in main.qml with just the little bit of code needed to declare the object.

Now I am ready to properly develop the behavior for the component. As with any good refactoring tool, the code kept working.




Rick Spencer

Gotta love the "developer art" ... those placeholder images should be replaced by sweet Zombie artwork as the game nears completion.
For a long time I resisted the QML wave. I had good reasons for doing so at the time. Essentially, compared to Python, there was not much desktop functionality that you could access without writing C++ code to wrap existing libraries and expose them to QML. I liked the idea of writing apps in JavaScript, but I really did not relish going back to writing C++ code. It seemed like a significant regression. C++ brings a weird set of bugs around memory management and rogue pointers. While manageable, this type of coding is just not fun and easy.

However, things change, and so did QML. Now, I am convinced and am diving into QML.
  • The base QML libraries have pretty much everything I need to write the kinds of apps that I want to write.
  • The QtCreator IDE is "just right". It has an editor with syntax highlighting and an integrated debugger (90% of what people are looking for when they ask for an IDE) and it has an integrated build/run system.
  • There are some nice re-factoring features thrown in that make it easier to be pragmatic about good design as you are coding. I also like the automatic formatting features.
  • The QML Documentation is not quite complete, but it is systematic. I am looking forward to more samples, though, that's for sure.

In my first few experiences with QML, I was a tiny bit thrown by the "declarative" nature of QML. However, after a while, I found that my normal Object Oriented thought processes transferred quite well. The rest of this post is about how I think about coding up classes and objects in QML.

In Python, C++, and most other languages that support OO, classes inherit from other classes. JavaScript is a bit different: objects inherit from objects. While QML is really more like JavaScript in this way, it's easy for me to think about creating classes instead.

I will use some code from a game that I am writing as an easy example. I have written games in Python using pygame, and it turned out that a lot of the structure of those programs worked well in QML. For example, having a base class to manage essential sprite behavior, then a sub class for the "guy" that the player controls, and various subclasses for enemies and powerups.

For me, what I call a QML "baseclass" (which is just a component, like everything else in QML) has the following parts:
  1. A section of imports - This is a typical list of libraries that you want to use in your code.
  2. A definition of its "isa"/superclass/containing component - Every class is really a component, and a component is defined by declaring it and nesting all of its data and behaviors in curly brackets.
  3. Parameterizable properties - QML does not have constructors. If you want to parameterize an object (that is, configure it at run time) you do this by setting properties.
  4. Internal components - These are essentially private properties used within the component.
  5. Methods - These are methods that are used within the component, but are also callable from outside the component. JavaScript does, actually, have syntax for supporting private methods, but I'll gloss over that for now.
In my CharacterSprite baseclass this looks like:

Imports

import QtQuick 2.0
import QtQuick.Particles 2.0
import QtMultimedia 5.0

Rectangle is a primitive type in QML. It manages presentation on the QML surface. All the code except the imports exists within the curly braces of the Rectangle.

Parameterizable Properties

property int currentSprite: 0;
property int moveDistance: 10
property string spritePrefix: "";
property string dieSoundSource: "";
property string explodeParticleSource: "";
property bool dead: false;
property var killCallBack: null;

Internal Components

For readability, I removed the specifics.
Repeater
{
}
Audio
{
}
ParticleSystem
{
    ImageParticle
    {
    }
    Emitter
    {
    }
}

Methods

With implementation removed for readability.
function init()
{
    //do some default behavior at start up
}
function takeTurn(target)
{
    //move toward the target
}
function kill()
{
    //hide self
    //do explosion effect
    //run a callback if it has been set
}

Now I can make a zombie component by creating a new file called ZombieSprite.qml and simply setting some properties (and adding some behavior as desired). Note that I declare this component to be a CharacterSprite instead of a Rectangle as in the CharacterSprite base class. For me, that is the essence of defining inheritance in QML.

CharacterSprite
{
    spritePrefix: "";
    dieSoundSource: "zombiedie.wav"
    explodeParticleSource: "droplet.png"
    Behavior on x { SmoothedAnimation{ velocity: 20 } }
    Behavior on y { SmoothedAnimation{ velocity: 20 } }
    height: 20
    width: 20
}

I can similarly make a GuySprite for the sprite that the player controls. Note that I added a function to Guy.qml because the guy can teleport, but other sprites can't.
I can call the kill() function in the collideWithZombie() function because it was inherited from the CharacterSprite baseclass. I could choose to override kill() instead by simply redefining it here.
CharacterSprite
{
    id: guy
    Behavior on x { id: xbehavoir; SmoothedAnimation{ velocity: 30 } }
    Behavior on y { id: ybehavoir; SmoothedAnimation{ velocity: 30 } }
    spritePrefix: "guy";
    dieSoundSource: "zombiedie.wav"
    explodeParticleSource: "droplet.png"
    moveDistance: 15
    height: 20;
    width: 20;
    function teleportTo(x,y)
    {
        xbehavoir.enabled = false;
        ybehavoir.enabled = false;
        guy.visible = false;
        guy.x = x;
        guy.y = y;
        xbehavoir.enabled = true;
        ybehavoir.enabled = true;
        guy.visible = true;
    }
    function collideWithZombie()
    {
        kill();
    }
}

So now I can set up the guy easily in the main QML scene just by connecting up some top-level properties:
Guy {
    id: guy;
    killCallBack: gameOver;
    x: root.width/2;
    y: root.height/2;
}

Rick Spencer

Dell Vostro without Windows Tax

I guess today is "Cyber Monday" or something like that? I'm looking for a reasonably priced laptop for my 13 year old daughter and saw that I had an email from Dell with some offers in it. After a little clicking around to compare different laptops, I ended up on a comparison page for the Vostro line. Naturally, I was drawn to the $299 laptop, when I noticed that there are 2 computers that differ only in the pre-installed OS, AND by $70 in price! This is the first time I've just naturally stumbled upon such a blatant exposure of the Windows tax.


Rick Spencer

The Road to 14.04




Copenhagen was quite a UDS. I found it to be very "tight" and well organized. The sessions seemed to be extra productive. I think compressing UDS down to four days helped us be more focused, and also less tired by the end. Maybe the next one should be three days? Copenhagen was also great for the content. Working on getting the desktop running on the Nexus 7 was very interesting and fun. Also, we made a lot of progress in terms of how we make Ubuntu. I think this will be an unusually fun cycle thanks to some of the changes to our development process.
Sleestacks are monsters bent on keeping you in the past

One of the great things about my job is that I get to talk to so many people about their vision for Ubuntu. As you can imagine, I run into a lot of variety there. However, a certain shared vision has come together. As we all look forward to 14.04, we can mostly agree about what we want that release to look like.

Seeing the future through conversations with community members, developers, users, etc....
The 14.04 Vision
Imagine that you are running Ubuntu 14.04. What will your experience be like?

  • Robust
  • Fresh
  • Easy to Make
  • Ubiquitous

First, the quality will be impeccable. You will not even think about up-time. The system will work close to perfectly. You will eagerly take updates and upgrades knowing that they will only make your system better. Applications will run smoothly and won't cause any system-wide problems.
Secondly, you will be able to get the freshest apps that you want on your client machines, and the freshest workloads on your servers. As a developer, you'll be able to deliver your applications directly to end users, and be able to easily update those applications.
Thirdly, we will have an extremely efficient release process, one that also inspires confidence in developers. Good changes will reach end users quickly, while mistakes are easily caught and corrected well before users are exposed to them.
Finally, by 14.04 Ubuntu will run everywhere. The same Ubuntu will be on your phone, your tablet, your netbook, your laptop, your workstation, your cloud server hosts, and the instances powering workloads in your public and private clouds. The same product with the same engineering running everywhere. A simpler world with Free code everywhere.

How Do We Get There?

"It's more important to know where you are going than to get there quickly"
We have laid out the steps necessary to achieve this vision. We intend to make this real! In fact, we've already achieved some of the things necessary to get to where we want to be for 14.04. Overall, by 14.04 we need to:

  • Assure quality at every step of the development process
  • Improve application sand-boxing
  • Simplify the release schedule
  • Implement continuous integration in the Ubuntu development process
  • Expand Ubuntu to include mobile form factors, such as phones and tablets

Assured Quality at Every Step

Leaping through time ensuring everything stays on track
In 13.04 we will change our full testing cadence from testing at each Alpha or Beta milestone, to doing full test runs every 2 weeks. This is about a 3 times increase in the rate of manual community testing. Furthermore, we will test more broadly, more deeply, and more rigorously, so that we will have a more complete view of the quality of Ubuntu during the development release.
We will also be leveraging some previous work to create a GUI testing tool. We call this tool "Autopilot" and it is designed to drive all the components of Unity in a testing environment. In 13.04 we will see expanded usage of this tool, and critically, dramatically more tests written. We will then be able to catch regressions in the Ubuntu user experience earlier, and ensure that fewer regressions make their way into the development release.
The one and only Martin Pitt has implemented a new test harness along with tests for GNOME. In this way, the Canonical QA labs will be able to identify regressions in GNOME as soon as they are introduced. This should allow GNOME developers to more quickly spot and fix problems, raising the overall quality of the GNOME code and improving the velocity of the GNOME developers.
Finally, after 13.04 ships, we will start doing updates in a new way. Once 13.04 is a stable release, updates to that release will not be delivered to all users as soon as they are available. Rather, updates will go out to a small number of users first, and the system will automatically monitor whoopsie-daisy to ensure that users aren't experiencing issues due to the update before releasing it to yet more users. We call this "phased updates".
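The gating logic behind phased updates can be pictured as a small loop: offer the update to a slice of users, watch the error reports, and only widen the slice while the error rate stays low. Here is a minimal sketch of that idea in Python; the phase sizes and threshold below are made-up illustrations, not the actual implementation:

import bisect

#a minimal sketch of the phased-update idea, not the real machinery;
#the phase sizes and crash threshold are made-up numbers
PHASES = [1, 5, 25, 50, 100]     #percent of users offered the update
CRASH_THRESHOLD = 0.01           #max tolerable error rate per phase

def next_phase(current_percent, crash_rate):
    #halt the rollout entirely if error reports spike
    if crash_rate > CRASH_THRESHOLD:
        return 0
    #otherwise widen the rollout to the next phase
    index = bisect.bisect_right(PHASES, current_percent)
    return PHASES[index] if index < len(PHASES) else 100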

Ubuntu Continuous Integration

Every day is like the day before, Daily Quality
There are 2 new things that we are doing in 13.04 to get us to the world of 14.04 where releases are easy and confidence inducing. First, Colin Watson has set up the Ubuntu build system so that all packages are built in a staging area (by reusing the proposed pocket for the development release). Only when a package is built successfully along with all of its dependencies are the packages copied into the release pocket and sent out to the wider development release. This means that there will be no more breakages due to out-of-sync packages when you update. Compiz and Nux will always be built together before they are copied over. The whole xorg stack too.
Building things in proposed provides an opportunity to assure the quality of the packages before they go into the release pocket. This will be accomplished with auto-package testing. Essentially, tests that come with a package will be run when the package is built. Additionally, any package that depends on the new package will also run its auto-package tests. The package will only be copied into the release pocket when all of the tests pass!

Start Application Insulation

Protecting the world from pure evil

By 14.04 we expect most applications to be run in a secure manner, so that poorly written or even malicious applications will have limited opportunity to do damage to a user's system. In 13.04 the Security Team is moving ahead with lots of work to enable AppArmor throughout Ubuntu, in addition to isolating some common infrastructure in use today, such as online accounts, gnome-keyring, and even dbus.
In this way, applications will be able to run and access only the small subset of the system that is relevant to them. When a user installs an application it will come with an AppArmor profile that will ensure that the kernel can insulate the system from the application appropriately. The fruits of this labor should be widely visible by 13.10.

Simplified Releases

A time turner literally creates more time
Ubuntu has traditionally held a series of Alphas and Betas. These had the purpose of ensuring that we had an installable image at least a few times during the release, and of providing an opportunity to do some wide testing of the system. This meant that several times throughout the release cycle we would stop development on Ubuntu, freeze the archive, and roll a release.
Since the advent of daily quality, Ubuntu can install pretty much every day. Furthermore, we are opting for much more frequent testing than the milestones allowed. Therefore, the Alphas and Betas have limited utility, but would have continued to sap our development velocity. So, in 13.04, Ubuntu is making the bold move of skipping all Alphas and having just a single Beta! This also allowed us to extend certain freezes, especially Feature Freeze. The new schedule has much more time for finishing features and fixing bugs, and much less time in freezes.






Rick Spencer

Let's Roll with 12.10


As a consequence of our daily quality efforts, some very interesting developments have taken place for 12.10.

First, while knocking around UDS, it occurred to me in a bit of a flash that all of the effort we invest in freezing the archives to make Alpha and Beta releases for Ubuntu is wasted work that slows down our velocity. We have daily quality and we have started using -proposed in the development release, so the chance of having an uninstallable image is greatly reduced.

(you can read the discussion on @ubuntu-devel)

So, why do we have Alphas and Betas? After some discussion, it seems to come down to:

  1. Because we want to encourage widespread testing by community members on a variety of hardware at a regular cadence
  2. Because we want targets for features and bug fixes
  3. Because we need to test our ISO production capabilities
  4. Because we always had them

Does all the effort in freezing the archive actually help? I don't think so. In fact, I think it is counter-productive.

  1. We can do the same testing with daily images. Furthermore, we can do that testing at a cadence of our liking, or even out of cadence if we want to squeeze in a special test run at some point. The ISO tracker nicely accommodates this now.
  2. Freezing the archive, by definition, *stops* packages and therefore bug fixes and features from getting uploaded. 
  3. Surely we don't need to slow down everyone's work so that we can try producing ISOs, and surely we don't need to do it so often and early.
  4. Of course, "because we always did" is not much of a reason.

It seems that what is needed is a regular cadence of deep and broad testing by the community to augment our automated tests, along with trial runs to ensure that our ISO building tools and process are working. Therefore, I propose we:

  1. Stop with the alphas and betas and win back all of the development effort
  2. Increase the cadence of "ISO testing" to whatever we want or whatever the community team can manage
  3. Spin a trial ISO near what is now beta time
  4. Spin ISOs for release candidates

Rick Spencer

«Bonjour Le Monde»


Pardon me for massacring a beautiful language ...

I have dreamed of teaching a programming course in French. The course would have two goals. The first goal is an introduction to the world of programming. The second goal is French practice for me. The students learn programming, and I learn to speak more French.

So, I present to you:
La Programmation pour Les Debutantes Absolus, En Mauvais Français (Programming for Absolute Beginners, in Bad French)

I will teach the course twice a week, in the evening. I will also hold office hours for questions and discussion.

We will use Apprendre à programmer avec Python. I will cover two chapters each class, and we will stop at chapter 12 or before. So, the course will run about three weeks.

We will use Google Hangouts for the classes, so a webcam and microphone are necessary, but the course is free. Also, the course uses Ubuntu, naturally ;)

Finally, I need to find students. If you want to start learning programming (and to help me with my French), you can leave a comment here, find me on IRC (rickspencer3 on freenode), or send me an email. Once I have found the students, we will find good times for the classes.

Rick Spencer


I have become quite fascinated by using HTML5 to render the GUIs of my Ubuntu applications. I love doing this, because I can continue to use Python as my library and desktop integration point, while being free to use cutting-edge presentation technology.

I sat down with didrocks yesterday and we set off to create a simple Quickly template out of some of the code I've written for bootstrapping these projects. The template will be very, very simple. All it will do is set up communication between HTML and JavaScript in a GtkWebKit window, and a Python back end. Developers will be free to choose how to use the WebKit window. For example, they could use JQuery or a host of other JavaScript libraries if they chose.

Didrocks was adamant that we should expose the excellent debugger (called The Inspector) that comes with WebKit in the template. However, I have found that for GtkWebKit, the documentation is sketchy (at best), and the API is unpredictable in its behavior. So, it took us 2 hours of experimentation and trolling source code to make an implementation that actually worked for showing The Inspector.

So, if you have been trolling the web looking for how to make this work ... I hope this works for you! Without further ado, here is a commented minimal example of a WebKit window that shows the Inspector. I also pushed a branch with just the code in case you find that easier to read or work with.

from gi.repository import WebKit
from gi.repository import Gtk
import os

#The activate_inspector function gets called when the user
#activates the inspector. The splitter is a Gtk.Paned and is user
#data that I passed in when I connected the signals below.
#The important work to be done is to create a new WebView
#and return it from the function. WebKit will use this new view
#for displaying The Inspector. Along the way, we need to add
#the view to the splitter.
def activate_inspector(inspector, target_view, splitter):
    inspector_view = WebKit.WebView()
    splitter.add2(inspector_view)
    return inspector_view

#create the container widgets
window = Gtk.Window()
window.set_size_request(400, 300)
window.connect("destroy", Gtk.main_quit)
splitter = Gtk.Paned(orientation=Gtk.Orientation.VERTICAL)
window.add(splitter)

#create the WebView
view = WebKit.WebView()

#Use set_property to turn on enable-developer-extras. This will
#cause "Inspect Element" to be added to WebKit's context menu.
#Do not use view.get_settings().enable_developer_extras = True;
#this does not work. Only using "set_property" works.
view.get_settings().set_property("enable-developer-extras", True)

#Get the inspector and wire up the activate_inspector function.
#Pass the splitter as user data so the callback function has
#a place to add the Inspector to the GUI.
inspector = view.get_inspector()
inspector.connect("inspect-web-view", activate_inspector, splitter)

#make a scrolled window to host the main WebView
sw = Gtk.ScrolledWindow()
sw.add(view)
splitter.add1(sw)

#put something in the WebView
html_string = "<HTML><HEAD></HEAD><BODY>Hello World</BODY></HTML>"
root_web_dir = os.path.dirname(os.path.dirname(__file__))
root_web_dir = "file://%s/" % root_web_dir
view.load_html_string(html_string, root_web_dir)

#show the window and run the program
window.show_all()
Gtk.main()

Rick Spencer

Thanks mterry! (Quickly Tutorial Updated) :)

So I decided I had to bite the bullet this morning and update the ubuntu-application tutorial for Quickly, since desktopcouch is no longer supported and I therefore removed CouchGrid from quickly.widgets. So I started looking through the tutorial to make notes about what I needed to change, and I found everything already fixed by Michael Terry! Amazing. I love working (or in this case not working) on open source projects ;)

Rick Spencer

12.04 Quality Engineering Retrospective

Ubuntu 12.04 LTS Development Release (Precise Pangolin) Beta 2 is (most likely) going to be released today. This means we are getting quite close to the final release! I have been running Precise as my only OS on both of my computers for months now, and it is far and away the best desktop I've ever used. It is beautiful, fast, and robust. This post is about the robust part.

After we release Beta 2, we should continue to see Ubuntu and Ubuntu Server improving day by day and quickly achieving release quality. I have asked Kate Stewart, our release manager, to do everything in her power to ensure that, starting with Final Freeze on April 12th, each daily image is of high enough quality that it could be our final release.

Why am I so confident that Ubuntu will only get better and better? Because Ubuntu stayed at usable quality throughout the development cycle. This created a virtuous cycle: the release was easier to develop and test with, which in turn made its quality easier to maintain.

After the last UDS, I described how we planned to maintain quality throughout the release. We followed those plans, and got the expected results. I am very, very proud of the results.

But rather than repeating the activities that we did, I thought I would look back and see what values arose from those practices. I think it was the following values that really had the impact, and that we should build on for 12.10 and beyond.
  1. Verify and fix before landing major changes in Ubuntu
  2. Don't wait to take action when something breaks
  3. Test for testability, then test rigorously
Let me provide some specific examples for each.

Verify And Fix Before Landing

Previously, teams would rush to meet certain development milestones, with the goal of meeting the letter of the law. A package had to be uploaded before Feature Freeze, for example, so a team would just push what they had, even if it was not proven to work, or even known not to work at all!

In 12.04 we took a different approach with packages that tended to have significant impact on the usability of Ubuntu, or that were otherwise important. In fact, the xorg team has been following this approach for many releases, using their "edgers" PPA and calls for testing. For many releases, new versions of X were vetted by a group of dedicated community testers before being uploaded to the development version of Ubuntu. In 12.04, they took this even a step further. In previous releases, users of the development release might upgrade while the archive was in an inconsistent state, because some parts of the new X stack were built while others were still building or waiting to build. This could result in situations where X was uninstalled altogether! In 12.04, the X team actually built the X stack separately, and then copied the binaries into the archive. Totally verified and fixed before landing!


Many folks have noted the dramatically increased robustness of Unity during the 12.04 development cycle. The Unity team did a lot of work to tune and improve their development processes. This included using a PPA for each new release of Unity, and then having that release rigorously tested (with test cases, a testing tool, etc...) by community members with different kinds of graphics hardware and other setups. Then regressions and problems were fixed in the PPA, testing repeated, and only then being uploaded to the development release.


Ultimately, though, I think Ubuntu Cloud must take the prize for rigor in this area, with their OpenStack testing and verification process. On each and every commit to OpenStack upstream, OpenStack gets deployed to a Canonical Cloud test bed (deployed with Juju, of course), and then a full suite of tests is run. If the tests pass, it gets automatically built into a PPA. When the team is ready to do a release into Ubuntu, they can make the many necessary tweaks in the PPA before uploading it to Ubuntu. This level of Precision allowed the Server team to stay with cutting-edge OpenStack, while maintaining a system that was always working, and therefore testable.

Don't Wait when Something Breaks


This value has really taken hold in the Ubuntu community, and it has really helped. There are 2 areas that I monitor each morning. First, I check how the archives look. I can do this because the Plus One Maintenance team, led by Colin Watson and Martin Pitt in turns, have written a tool that finds problems in the archives. Furthermore, each morning they strive to fix those problems. In this way, uninstallable packages and other problems are fixed before we try to spin that day's daily ISO.


After spinning the daily ISO, the QA team runs a set of smoke tests on it. If the tests can't run, or fail, the right engineering teams are notified, and they either fix the tests or fix the underlying failure so we can try spinning the CD again. The daily response meant that it was pretty certain that issues were introduced in the last 24 hours, which in turn made them easier and faster to resolve.

Still, Ubuntu development is incredibly rapid. We didn't want to set up a situation where people were afraid to make changes because they might break something. Therefore, from the beginning of the cycle, we accepted that our testing would not catch everything, and that some things would break. So, we set the goal of quickly reverting changes that caused the development release to be hard to test or use. We only had to resort to this a few times. For example, at one point, LightDM was not able to load any but the default desktop. As a result, it was not possible to use desktops like Kubuntu, Xubuntu, etc... The change was reverted the same day so that testing could continue.

Test for Testability, then Test Rigorously

So, we now have automated testing of Canonical upstream code, as well as daily images and daily upgrade testing. However, we don't consider this the end of the testing process, but the beginning. In other words, we use the automated tests to tell us if the code trunks and images are worth testing harder.


In 12.04 development, we evolved our community testing practices to meet these needs. In the past we would do a "call for testing", which meant "please update and try out Ubuntu, and let us know if anything broke". In 12.04 a "call for testing" changed to include test cases (so that we could know what worked, not just what broke), coverage of hardware and configurations (by recruiting community members who had the right setups), and organized results.

This thought process was not limited to only our Canonical-produced code, however. Before, or soon after, introducing potentially disruptive changes, Nicholas Skaggs, our new Community Team member, collects test cases from the relevant developers, and then organizes community members to execute those test cases. He is also organizing these tests at important milestones, such as Beta 1 and now Beta 2.

Rick Spencer

We are getting closer to release! Beta 2 freeze is tomorrow. Quality in 12.04 is looking very good today. However, we will still see hundreds of bugs get fixed across desktop and server between now and April 26th. In the past, I've found that in the flurry of activity it's easy to lose track of the most important bugs in all that noise, and then some scrambling ensues.

To counteract this, at least for myself, I had a couple of calls with Jason Warner (Desktop Engineering Manager), Robbie Williamson (Server Engineering Manager), and Steve Langasek (Foundations Engineering Manager). We talked about which bugs we have (that we know about now) that would actually keep us from releasing as scheduled. We have a term called "release blocking bug", but in point of fact, almost none of them would actually keep us from releasing. The kinds of bugs that would truly make us slip an Ubuntu release are ones that cause problems with existing OSs in multi-boot situations, serious bugs in the installer, serious bugs in update manager, bugs that result in a loss of networking, etc... Bugs that can reasonably be fixed in an update do not block the release.

We decided that the best way to keep track of the very few bugs like this is to continue to track them as normal, but to set their importance as critical.

There is another set of bugs that I also ask the team to focus on. This set is more aspirational. I want us to fix all of the upgrade bugs that we find from automated testing, or at least all of the High and Critical importance ones. I would sincerely love to see every upgrade go smoothly for all of the millions of people who will be upgrading to Precise.

So, when am I going to start talking about pychart? Right now, in fact! Keeping tabs on bugs is boring, so it must be automated, and I love automating things with Python. So, I wrote a program that scrapes the data from those 2 pages, stores the info in a sqlite database, and generates a line graph each time I run it.

You can see all the code here if you want, but I doubt you do; it's pretty hacky. But it was fun to bring together the excellent json, HTMLParser, sqlite3, and pychart libraries.
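The sqlite3 part of a program like this is only a few lines. Here is a minimal sketch of the storage step (the table layout and file name are my guesses, not the actual schema from the program):

import sqlite3
import datetime

#a sketch of the storage step; the schema is an assumption
db = sqlite3.connect("/home/rick/Documents/bugs.db")
db.execute("""CREATE TABLE IF NOT EXISTS bug_counts
              (day INTEGER, upgrade_bugs INTEGER, blocker_bugs INTEGER)""")

def record_counts(upgrade_bugs, blocker_bugs):
    #store today's counts keyed by date ordinal, which is also what
    #the graphing code below uses for its x axis
    day = datetime.date.today().toordinal()
    db.execute("INSERT INTO bug_counts VALUES (?, ?, ?)",
               (day, upgrade_bugs, blocker_bugs))
    db.commit()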

Here's the pychart money shot:
#pychart's standard import pulls in axis, area, line_plot,
#tick_mark, and canvas; graph_list holds the rows of data to plot
import datetime
from pychart import *

def format_date(ordinal):
    d = datetime.date.fromordinal(int(ordinal))
    return "/a60{}" + d.strftime("%b %d, %y")

xaxis = axis.X(label=_("Date"), tic_interval=1, format=format_date)
yaxis = axis.Y(tic_interval=2, label="Open Bugs")
ar = area.T(x_axis=xaxis, y_axis=yaxis, y_range=(0, None))
plot = line_plot.T(label="upgrade", data=graph_list, ycol=1, tick_mark=tick_mark.star)
plot2 = line_plot.T(label="blockers", data=graph_list, ycol=2, tick_mark=tick_mark.square)
ar.add_plot(plot, plot2)

can = canvas.init("/home/rick/Documents/bugs.png", "png")
print ar.draw(can)
#this snippet runs inside my app's window class, hence self.ui
self.ui.image1.set_from_file("/home/rick/Documents/bugs.png")

Rick Spencer

GObject Introspection Prompts

Dang, I hate how I often type "GIO" instead of "GOI".

Anyway, I'm starting a week of focusing on coding. Unfortunately I have a bunch of meetings that I cannot escape, but otherwise, I cancelled all non-essential meetings, and will be diving into the platform and working with the real application developer experience on Precise. Also, I have a few work items that I should really take care of.

Today, I started with a bite-sized morsel. I updated quickly.prompts to use GObject Introspection. The key value here is that you can now use quickly.prompts with a modern Quickly app.

The branch is waiting to be reviewed and merged here.

Rick Spencer

Girrrr: PyGame + Gtk in a GOI World

Back in August, I wrote a bit about how to embed PyGame into a pygtk app (and why it might be interesting to do that). Well, the world has moved on a bit, so today I updated the code sample to work with GObject Introspection.


It wasn't too hard to do, but did take a bit of digging around. I created a diff between the files and then commented on the diff, so you can see the required changes.


 === modified file 'game.py'
--- game.py 2011-08-25 12:14:00 +0000
+++ game.py 2012-02-08 10:22:50 +0000
@@ -1,41 +1,41 @@
import pygame
import os
#you can't import Gtk and GObject in the old way
#so delete these imports
-import gobject
-import gtk
#I haven't made quickly prompts work with introspection yet
#I think it will be easy, but in the meantime, we can't use
#quickly.widgets or quickly.prompts
-from quickly import prompts
#here's how to import GObject and Gtk
#you have to import GdkX11 or you can't get a widget's xid
+from gi.repository import GObject
+from gi.repository import Gtk
+from gi.repository import GdkX11
#"gtk" has to be changed to "Gtk" everywhere
#I used find and replace for this
-class GameWindow(gtk.Window):
+class GameWindow(Gtk.Window):
def __init__(self):
- gtk.Window.__init__(self)
- vbox = gtk.VBox(False, 2)
+ Gtk.Window.__init__(self)
+ vbox = Gtk.VBox(False, 2)
vbox.show()
self.add(vbox)
#create the menu
- file_menu = gtk.Menu()
+ file_menu = Gtk.Menu()
- accel_group = gtk.AccelGroup()
+ accel_group = Gtk.AccelGroup()
self.add_accel_group(accel_group)
- dialog_item = gtk.MenuItem()
+ dialog_item = Gtk.MenuItem()
dialog_item.set_label("Dialog")
dialog_item.show()
dialog_item.connect("activate",self.show_dialog)
file_menu.append(dialog_item)
dialog_item.show()
- quit_item = gtk.MenuItem()
+ quit_item = Gtk.MenuItem()
quit_item.set_label("Quit")
quit_item.show()
quit_item.connect("activate",self.quit)
file_menu.append(quit_item)
quit_item.show()
- menu_bar = gtk.MenuBar()
+ menu_bar = Gtk.MenuBar()
vbox.pack_start(menu_bar, False, False, 0)
menu_bar.show()
- file_item = gtk.MenuItem()
+ file_item = Gtk.MenuItem()
file_item.set_label("_File")
file_item.set_use_underline(True)
file_item.show()
@@ -44,10 +44,10 @@
menu_bar.append(file_item)
#create the drawing area
- da = gtk.DrawingArea()
+ da = Gtk.DrawingArea()
da.set_size_request(300,300)
da.show()
- vbox.pack_end(da)
#pygtk didn't require all of the arguments for packing
#but Gtk does, so you have to add all the arguments to pack_end here
+ vbox.pack_end(da, False, False, 0)
da.connect("realize",self._realized)
#set up the pygame objects
@@ -70,7 +70,15 @@
self.y += 5
def show_dialog(self, widget, data=None):
- prompts.info("A Pygtk Dialog", "See it works easy")
+ #prompts.info("A Pygtk Dialog", "See it works easy")
#I just hand crafted a dialog until I can get quickly.prompts ported
+ title = "PyGame embedded in Gtk Example"
#a lot of the constants work differently
#gtk.DIALOG_MODAL -> Gtk.DialogFlags.MODAL
#gtk.RESPONSE_OK -> Gtk.ResponseType.OK
#There's some info here to get started:
#http://live.gnome.org/PyGObject/IntrospectionPorting
#but I found that I had to poke around with ipython a bit to get it right
+ dialog = Gtk.Dialog(title, None, Gtk.DialogFlags.MODAL,(Gtk.STOCK_CANCEL, Gtk.ResponseType.CANCEL, Gtk.STOCK_OK, Gtk.ResponseType.OK))
+ content_area = dialog.get_content_area()
+ label = Gtk.Label("See, it still works")
+ label.show()
+ content_area.add(label)
+ response = dialog.run()
+ dialog.destroy()
def quit(self, widget, data=None):
self.destroy()
@@ -87,14 +95,14 @@
return True
def _realized(self, widget, data=None):
#since I imported GdkX11, I can get the xid
#but note that the properties are now function calls
- os.putenv('SDL_WINDOWID', str(widget.window.xid))
+ os.putenv('SDL_WINDOWID', str(widget.get_window().get_xid()))
pygame.init()
pygame.display.set_mode((300, 300), 0, 0)
self.screen = pygame.display.get_surface()
- gobject.timeout_add(200, self.draw)
+ GObject.timeout_add(200, self.draw)
if __name__ == "__main__":
window = GameWindow()
- window.connect("destroy",gtk.main_quit)
+ window.connect("destroy",Gtk.main_quit)
window.show()
- gtk.main()
+ Gtk.main()
I pushed the example to launchpad, in case you want to see the whole thing in context.

Rick Spencer

Bit of fun with JQuery and CSS

I stole some time to play a bit more with Veritas and JQuery today. Instead of the ugly list that I had before, I wanted some interactivity. So I got started by adding a little CSS to make a "card" for each bottle.

.bottle
{
    width: 300px;
    background-color: rgba(0,0,0,.5);
}
Then I wrote a bit of JavaScript to make each div that I pass into the HTML into a JQuery "draggable", and do a bit of cheap layout.

else if(signal == "add_bottle")
{
    div = jQuery(data,{}).draggable();
    div.css('position','absolute');
    div.css('left', lft);
    div.css('top', tp);
    lft += 10;
    tp += 10;
    $( "#bottle_div" ).append(div);
}
Next I'll add some nicer layout. Then I'll start adding filters and dropdowns so I can sort and do other fun stuff.

Rick Spencer

In Vino JQuery, not a Socratic, dialogs

I spent a bit of today adding the capability to Veritas to collect user input in the form of a "dialog". I put "dialog" in quotes, because I used a JQuery dialog within the HTML, rather than a Gtk dialog window.

Before settling on JQuery for this project, I looked at it and YUI in some depth. I was attracted to YUI because it seems very very complete. In fact, it has a filterable and sortable data grid, which is very important to me, as most applications, when you get down to it, are really just CRUD apps.

However, I went with JQuery for Veritas because the samples and tutorials made it seem very easy to get things done, and Veritas has simple needs.

JQuery has a cool page where you can create just the JavaScript that you need, as well as a theme generator. Note the "grapey" dialog bar in the screenshot; I set that color in the theme generator.

What the Dialog Does
First thing was to lay out the dialog in the normal HTML way. Note that I set it to display:none by default.

  <div id="dialog" title="Enter New Bottle" style="display:none;width:600px">
<fieldset>
<p>
<label for="country">Country</label>
<input type="text" name="country" id="country" value="" placeholder="">
</p>
<p>
<label for="region">Region</label>
<input type="text" name="region" id="region" value="" placeholder="">
</p>
<p>
<label for="domain">Domain</label>
<input type="text" name="domain" id="domain" value="" placeholder="">
</p>
<p>
<label for="grapes">Grape(s)</label>
<input type="text" name="grapes" id="grapes" value="" placeholder="">
</p>
<p>
<label for="price">Price</label>
<input type="number" name="price" id="price" value="" placeholder="$">
</p>
<p>
<label for="rating">Rating</label>
<select name="rating" id="rating" value="" placeholder="">
<option value="1">1</option>
<option value="2">2</option>
<option value="3">3</option>
<option value="4">4</option>
<option value="5">5</option>
<option value="6">6</option>
<option value="7">7</option>
<option value="8">8</option>
<option value="9">9</option>
<option value="10">10</option>
</select>
</p>
<p>
<label for="taste">Taste</label>
<input type="text" name="taste" id="taste" value="" placeholder="">
</p>
<p>
<label for="image">Label Picture</label>
<input type="text" name="image" id="image" value="" placeholder="">
<button id="preview_button">Preview</button>
<img id="preview_image" src=""/>
</p>
<p>
<button id="submit_new">OK</button>
</P>
</fieldset>
</div>

Then, I created a "New" button, and wired it up to some code that I was able to get from the excellent JQuery demo pages to display the dialog. Note that the documentation made it really easy to copy and paste my way to success here, including figuring out how to choose different reveal effects and such.

$( "#dialog" ).dialog({
    autoOpen: false,
    width: 600,
    show: "blind",
    hide: "blind"
});
$( "#new_button" ).click(function() {
    $( "#dialog" ).dialog( "open" );
    return false;
});
The dialog includes a submit button. I wired this up to create a JSON object and send a signal with all this data to the back end using the "send_message" JavaScript function.
$( "#submit_new" ).click(function() {
    bottle = {"country": $( "#country" ).val(),
        "region": $( "#region" ).val(),
        "domain": $( "#domain" ).val(),
        "grapes": $( "#grapes" ).val(),
        "price": $( "#price" ).val(),
        "rating": $( "#rating" ).val(),
        "taste": $( "#taste" ).val(),
        "image": $( "#image" ).val()};
    send_message("new_wine", bottle);
    $( "#dialog" ).dialog( "close" );
    return false;
});

Yesterday, send_signal took 2 strings: the name of the signal and some other data. Today I changed it to take the name of the signal and any JavaScript object. The function uses a popular JSON parser to stringify the JavaScript object before using the "set title hack" to pass the data to the back end.
function send_message(signal_name, data)
{
    title = document.getElementsByTagName("title")[0];
    message = {"signal": signal_name, "data": data};
    title.innerHTML = JSON.stringify(message);
}
Now that I have written that part, I don't have to worry about formatting my data; I can just pass it over.
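For what it's worth, here is my guess at how the receiving side of the "set title hack" gets wired up, assuming GtkWebKit's "title-changed" signal (a sketch, not the actual HTMLWindow code):

#somewhere in HTMLWindow's setup; "title-changed" hands the handler
#the view, the frame, and the new title string, which matches the
#_on_html_message signature below
view.connect("title-changed", self._on_html_message)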

Similarly, on the back end, I made the HTMLWindow class decode the json and pass it along:

def _on_html_message(self, view, frame, title):
    if title != "null":
        try:
            message = json.loads(title)
        except Exception, inst:
            print inst
            message = {"signal": "error", "data": "signal not parsed"}
    else:
        message = None
    if message is not None:
        self.on_html_message(message["signal"], message["data"])

def on_html_message(self, signal_name, data):
    pass

As a result, subclasses like VeritasWindow can just use the data without worrying about the implementation. It doesn't do anything with the data yet.

2 Way Communication
I did add one bit of round-tripping. It turns out that as a security precaution, the "file" input type does not let the JavaScript see the full path selected; it only allows the selected file to be uploaded to the server. I hope that I can figure out how to let the user grant Veritas permission to pass the selected file to the JavaScript, but I can hack around it if it doesn't turn out to be easy or possible.

In the meantime, I let the user type in a full path to the file and then click "Preview". This takes the entered string and sends it to the back end.

$( "#preview_button" ).click(function() {
    send_message("image_preview", $( "#image" ).val() );
    return false;
});


The back end then uses the awesome PIL library to make a thumbnail, and then passes the path of the thumbnail back. I actually suspect that I will be able to skip the step of saving the file and just use the string data, possibly with the Canvas element.


def on_html_message(self, signal_name, data):
    if signal_name == "image_preview":
        try:
            img = Image.open(data)
            img.thumbnail((128,128), Image.ANTIALIAS)
            path = """/home/rick/.tmp/thumbnail.jpg"""
            img.save(path, "JPEG")
            path = "file://" + path
            self.view.execute_script("receive_signal('set_preview','" + path + "');")
        except Exception, inst:
            print inst.message
            self.view.execute_script("""receive_signal('set_preview_error','Could not find a valid image at %s');""" % data)
Debugging HTML/JavaScript
Another handy thing I found today is that I can load the HTML page into Firefox, and use the web console to poke at it. Very handy. Of course, this works now because I am not doing string replacement, but I think that I can actually make a similar thing work with a WebKit window.

Next
So now that I can collect the info from the user, I'll start saving the data in a sqlite database, and then work on presenting the data to the user.

Rick Spencer

In Vino Veritas and HTML5 Client Apps


So, basically, not to put too fine a point on it, I've started to write apps for Ubuntu in a different way, essentially replacing Gtk (or really PyGtk) with HTML5. This is my first post about how I am doing it. I've just started a project called "Veritas", which will be a wine tasting database for my wife and me. We'll be able to enter information about each bottle that we drink, and then look at trends over time, perhaps helping us pick nicer and nicer bottles as we go.

First, though: what happened? I thought you got along great with Gtk.
Well, I do still have a soft spot in my heart for PyGtk. Believe me, I've written plenty of code in it. I know the ins and outs pretty well, and I'm able to do things with it like write a responsive UI that doesn't block too much during long-running processes and such. PyGtk is great for building "boxy" apps, but I think a lot of people want to build slicker apps than Gtk is really designed for, or at least design them in different ways than Gtk supports well.

Why not Qt and QML?
This app, in fact, would be well suited to QML. However, I have other apps in mind, and I found QML/Qt to not be quite up to the job. For instance, I want to write a communication app to combine OpenLDAP and IRC functionality. Currently, there are no Qt libraries for LDAP or IRC, so to write such an app with QML, I'd have to write C++ Qt code to wrap whatever C libraries exist, and then write code to export models from that C++ code to expose it the right way in QML. That is a lot of overhead, especially considering that there are good Python libraries for LDAP, IRC, and pretty much anything else desired. So, I designed myself a system that lets me stick with Python for the back end code.

Also, QML lacks a widget toolkit at the moment, so there would be a lot of manual coding of things like buttons and such.

Why HTML5?
I chose HTML 5 for the widget toolkit for a few reasons.
  • I already know HTML/CSS/JavaScript pretty well, and I know that cool things can be done with it. I bet a lot of you know it pretty well too.
  • WebKit is very well-supported Open Source, used and maintained by many large companies.
  • There are lots of cool widget toolkits to choose from; I'm currently looking at YUI, since I think it's in pretty heavy use by some of the web teams at Canonical.
  • Because I wanted to try out HTML for client programming.
My Application Architecture
First, I laid everything out totally flat to start with. This is because I wanted to come to grips with making the view code talk to the model/controller code without mucking with any extra complexity. Of course, I will need to modify the layout as the actual code grows.

Currently, I am only focused on making a client application programming system, though I may, in the future, extend the system so the model/controller back end could be on a server, and the view available via a browser. But this is firmly out of scope right now. I am, however, trying to be cognizant of making the system essentially portable by sequestering the Gtk-specific code into specific files that can be replaced if I want to run it without Gtk at some point.

Therefore, there are some important differences to note if you are used to web programming.
  • The back end and view code communicate via signals to each other. This is much different than web programming, where the view makes a request and waits for the server to respond with a string (for Ajax apps) or redirects to another view, passing some state along with it.
  • This means that the back end can send signals to the view. The view does not need to poll to see if the back end is ready, for example; the back end can just send a signal when it is.
  • This also means that long-running processes can block the GUI, since they are running in the same thread. I shall most likely put the Gtk main loop in its own thread so I can run long-running processes in separate threads, and then communicate between them (see the sketch after this list).
  • The view cannot call a function on the back end and wait for a response (for example with XmlHttpRequest). Rather, it can only send the back end a signal.
  • The back end is not stateless. This is greatly simplifying. Most web programming frameworks have a lot of code to maintain state by storing it and accessing it on future requests by reading cookies stored on the client.
  • Currently, I have nothing like the server-side tags that are the bread and butter of most web programming frameworks. These typically work via string replacement, so I could either find a library to add this functionality, or make it easier to do string replacement with the HTML. This is typically desirable for a web app, since you want to configure the HTML before it is sent from the server. It is less important in a client app, but still, some string replacement of HTML may save some effort in writing complex JavaScript against the browser DOM.
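For the threading idea in the list above, the shape of the solution would be something like this rough sketch (my illustration, not actual Veritas code); gobject.idle_add is the standard way to hand a result back to the Gtk main loop safely from another thread:

import threading
import gobject

gobject.threads_init()  #required before mixing threads with the main loop

def start_long_task(work, on_done):
    #work and on_done are hypothetical callables: run the slow work()
    #off the main loop, then marshal the result back to the Gtk thread
    def runner():
        result = work()
        #idle_add runs on_done in the main loop; on_done should
        #return False so it only runs once
        gobject.idle_add(on_done, result)
    threading.Thread(target=runner).start()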
Ok, let's get to the good stuff. The bin file is called "veritas". Running this file creates a VeritasWindow and then starts the Gtk main loop. The Gtk main loop is there because the WebKit window has to run inside something, and I chose a Gtk Window for this because of the simple integration with Ubuntu.
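The whole bin file amounts to only a few lines. A minimal sketch of what it plausibly contains (the module name here is my assumption, not the project's actual layout):

#!/usr/bin/python
#a sketch of the "veritas" bin file: create the window, run the loop
import gtk
from veritas_window import VeritasWindow  #module name is a guess

window = VeritasWindow()
window.connect("destroy", gtk.main_quit)
window.show_all()
gtk.main()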

A VeritasWindow only does 2 things so far. It tells its baseclass "HTMLWindow" what HTML file to load, and it listens for signals from that view. Later, it will create new HTML5 windows and do other stuff in response to signals from the HTML view.

HTMLWindow is meant to be used only as a base class. First, it creates a top-level menu so you can quit the app; also, I think that apps should have menus (I haven't really thought through how menus will work in this system yet, but I'm hoping that DBus Menu helps me out). Then it loads the HTML that the subclass told it to load. It also listens for signals from the view, parses them, and has what is essentially a virtual function called "on_html_message" for subclasses to override. You should be able to receive messages from the view without looking at the internals of how it works. Among other things, this is platform specific.

main.html is the HTML5 code for the main window. All it does now is send a signal that it is loaded, and you can see that I added a heading. When paired with main.css the layout and look and feel of the UI will be controlled completely in the view code.

helpers.js is a file that I think I may need to handle platform specific signals sent to the view. Of course, you can always call "execute_script" and send whatever you want from the backend, but I think it's cleaner to expect well formatted signals from the back end instead.

Conclusion
So, that's basically all the boilerplate for making an HTML5 client app for Ubuntu. This represents a few hours of work on my part to make a re-usable and extensible system.
My next steps will be to do some database programming with sqlite, then I'll probably build a data input window for it. This certainly calls to mind Rails-like thinking (hmmm, I have the model, why can't I generate the view from that on the fly?), but I don't think I want anything that complex. After I finish Veritas, I'll then extract the base classes and such, and perhaps create a Quickly template, then go ahead and work on my certain-to-be-more-complex communication application.

Rick Spencer

Smoke Tests

The QA team has started a page for getting up-to-the-minute automated results from smoke testing of daily images. Check it out! They still have more smoke tests to set up, but everything is running automatically from daily builds. You can check to see if the latest build installs and if the basic tests pass. If they do, it's probably worth testing with that build. If not, then the team should be busy at work making it testable!

Also, this page is just one small step in the blueprint for a smoke test results page.

Rick Spencer

12.04 Quality Initiatives Update

It's been 2 weeks since UDS. We left UDS with a lot of big plans about developing 12.04 in a precise way. So, in 2 weeks, we've gotten a good start. Here's a slapdash update before I leave for the weekend.

+1 Maintenance and Daily Quality
For the last week, we've had an installable and usable image every day! Colin is keeping a wiki until we have a proper dashboard set up.

Upstream Testing
A lot has gone on in this area. The QA team spent this week getting a QA lab set up, so we have a place to run automated tests. The Desktop team is working with the DX team to get automated compiz testing set up, hopefully as early as next week. I need to circle back with other upstream projects, especially the ones that Ubuntu Engineering makes, to check on their progress.

Distro Acceptance Testing
Didier Roche has started documenting tests to be run before uploads of Unity and working with the QA team to figure out how best to track them. You can check out his progress here.

QA Lab
Like I said, the QA team has been working on the QA lab. We have trunk tests and distro acceptance tests to run. We need hardware! Here's a shot that Pete Graner took of the new rack for OpenStack testing:
Lots going on to make 12.04 in a precise way! You can see the QA team's current work items here.

Rick Spencer

Some Rock Solid things are Quite Beautiful

I mentioned at the closing session of UDS last week that it was a remarkable UDS due to the amount of time spent discussing how we build Ubuntu, not just what we will build. As a community, we have blazed many new trails in software development and delivery, and I feel strongly that we are standing at a nexus: we can collect the wisdom of the past seven years of developing an open source community distro and apply it in a way that introduces a step function in our adoption curve as we pursue our goal of a mainstream free desktop.

Most of these "how should we build it" discussions circled around building and maintaining development velocity in 12.04, so that we could add the new features that users need while maintaining and delivering the quality they also need. Fortunately, we laid some groundwork in the 11.10 cycle. Pete Graner led the QA team in 11.10. Along with some new QA staff, they instituted some important practices, like testing the daily ISO automatically, and they set up a QA lab for running tests automatically, along with reporting of those test results.

Acceptance Criteria
How we assess and accept code from upstreams within Canonical was an area ripe for discussion. We arrived at UDS with a strong view about how we should be doing this, and we had three discussions about it at UDS. Jason Warner calls this area of collective effort "Acceptance Criteria". Please note that for 12.04 and the foreseeable future, this applies ONLY to code developed by Canonical, or as we call them, "Canonical upstreams". There are two main goals of this effort:

  • Ensure that landing code from Canonical upstreams does not introduce issues that make the development release too hard to use and develop on, thus slowing down velocity for everyone.
  • Encourage Canonical upstreams to fix bugs throughout the cycle, and not to wait until after Feature Freeze to focus all efforts on bug fixing.
As a result, we should see faster development, and a higher quality final product.

Acceptance Criteria means that upstreams have some responsibilities, and Ubuntu has some responsibilities. For upstreams, it boils down to "treat your trunk as sacred". Practically, it requires:
  • There is a trunk of code bound for Ubuntu.
  • This trunk always builds automatically.
  • This trunk has tests that are always passing automatically.
  • All branches are properly reviewed for having both good tests and good implementation before merged into trunk.
  • Any branch that breaks trunk, by causing automated tests to fail or causing trunk to stop building, is immediately reverted.
For Ubuntu Engineering, the responsibilities include:
  • Every maintainer in Ubuntu must have a test plan for upstream trunks that is run before uploading to the development release.
  • Tests in the test plan that are automated can be run with the help of the QA team.
  • Tests in the test plan that are manual can be run with the help of the community team (and the community QA Lead that is to be hired).
  • Refrain from uploading a trunk into Ubuntu if there are serious bugs found in testing that will slow down people using the development release for testing and development.
  • Revert uploads that break Ubuntu, as there is no point in having the latest of a trunk in Ubuntu if it's broken and just slowing everyone down.
  • Add tests to upstream projects for the Ubuntu test plan if serious bugs do get through that cause a revert.

ISO integration testing
Every day the QA team automatically runs tests on the ISO produced that day, if any. This was set up in 11.10. For 12.04, we will expand on this effort substantially: first, by growing the body of tests run; secondly, by automatically reporting on the quality of the ISO each day; finally, by responding to the results of the tests immediately (see the next section).

Daily Quality
We will strive to ensure that we have a new daily ISO each day. If the QA team finds an ISO to be "untestable" or failing critical tests that will hamper development velocity that day, we will respond by trying to fix the issues. For issues that cannot foreseeably be fixed within 3 hours, we will typically back out those changes. After the issue is addressed by being fixed or backed out, we *will spin another ISO*. We will collect metrics on what percentage of days we have a working and testable ISO.

Pre-archive Testing
Of course, catching problems in ISO testing and fixing them every day is nice, but stopping the problems from reaching Ubuntu in the first place is even better. With that end in mind, Evan Dandrea ran a very interesting session about testing library and other changes before uploading them to the archive for the development release.

This will start out simple. The QA team will be able to install the latest ISO in their test environment, then pull an updated package from a PPA. A tool that I lovingly named "tool 2" will be created that will use rdepends to find packages that both depend on the newly upgraded package and also have tests worth running. Tool 2 will then run those tests and report the results. In this way, issues with libraries and other transitions can be fixed before they get into the development release and slow everyone down.
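
To make the rdepends step concrete, here's a minimal sketch of how a script might ask apt for the reverse dependencies of a package. This is just an illustration, not tool 2 itself (which doesn't exist yet), and the package name is only an example:

import subprocess

def reverse_depends(package):
    # apt-cache rdepends prints the package name, a "Reverse Depends:"
    # header, then one (possibly "|"-prefixed) depending package per line
    output = subprocess.check_output(["apt-cache", "rdepends", package])
    return [line.strip().lstrip("|") for line in output.splitlines()[2:]]

if __name__ == "__main__":
    # example package only; tool 2 would go on to filter this list down
    # to packages with tests worth running, then run and report them
    for pkg in reverse_depends("libgtk2.0-0"):
        print pkg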

The next step, which I sincerely hope we get to during 12.04 development, is to make tool 1. Tool 1 takes the output of tool 2 and judges whether it should copy the newly updated package, or some set of packages, into the release archive. If we get tool 1 set up, then uploads to the development release would first go into the proposed archive for the development release, be tested there, and only be added to the release archive when found to be "ok" by the tests.

+1 Maintenance Team
Colin Watson is leading an experiment for 12.04 development. He will be leading a small staff of rotating engineers who are focused solely on the stability of the development release. We plan to learn from this effort, and see if we should repeat it, tweak it, or drop it for future versions. In any case, these efforts are meant to enhance, not replace, the generally diffused responsibilities for quality of the ISO and the archive. Colin led a good UDS session on the topic. The priorities defined there are:
  1. Deal with ISO smoke test issues, including install images being buildable
  2. Upgrades through development releases work day to day, look at conflict checker
  3. All packages in main are installable
  4. All packages in main are buildable
  5. Component-mismatches / MIRs / etc.
  6. Finishing library/NBS transitions through archive (beyond main)
  7. All packages in the archive are buildable

Summary
All in all, these are a lot of structural changes to how we approach building Ubuntu and ensuring the quality of it. Here is a table to highlight some of the key changes.


Practice | 11.10 | 12.04
Canonical upstream automated tests | prevalent in some projects, not others | all projects will have automated tests
Canonical upstream automated builds | prevalent in some projects, not others | all projects will have automated builds
Canonical upstream branch reviews | all projects | all projects
Canonical upstreams reverting branches that "break" trunk | occurs in some projects, not others, and not always | all projects will revert branches that break trunk
Testing of Canonical upstream code before uploading to the development release | not common or systematic | all Canonical upstream projects will have a test plan that is executed before each upload
Reverting Canonical upstream code from the development release | the development release waits for an upload with "fixes" | uploads that slow velocity for others are backed out
Daily ISO testing | limited test coverage, test failures not immediately responded to | more test coverage, fix issues and respin within the same day
Daily ISO | can go for days or longer without a working daily | a broken daily ISO becomes the #1 priority for whoever broke it
Pre-archive testing | "call for testing", not totally systematic, no way to know what tests were run | ability to run tests for rdepends before release, potentially test in proposed for the development release
Archive maintenance | responsibility diffused throughout the team | responsibility diffused throughout the team, complemented by a small dedicated crew

Read more
Rick Spencer

Stealthy, An App to Pause Desktop Logging


Here's a quick preview of an app I'm working on. It's called "Stealthy" because it provides a kind of "stealth" mode by pausing Zeitgeist logging while it's running. The upshot is that while you see the indicator, your activities during that time won't show up in Unity. Quitting Stealthy starts the logging again.

It's also got a "Delete History" function that clears all Zeitgeist and GNOME most-recently-used history, giving you a clean slate.
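
For the curious, here's a rough sketch of what a "Delete History" might look like. This is not Stealthy's actual code, and the Zeitgeist D-Bus names (bus name, object path, and the Log interface's DeleteLog method) are assumptions on my part rather than verified details:

import dbus
import gtk

def delete_history():
    # wipe the whole Zeitgeist activity log over D-Bus
    # (the names below are assumptions, not taken from Stealthy)
    bus = dbus.SessionBus()
    log = bus.get_object("org.gnome.zeitgeist.Engine",
                         "/org/gnome/zeitgeist/log/activity")
    dbus.Interface(log, "org.gnome.zeitgeist.Log").DeleteLog()
    # clear GNOME's recently-used list as well
    gtk.recent_manager_get_default().purge_items()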

Here's a video demo:


D'oh ... I just realized I searched for the wrong things to demonstrate that the Delete History command worked. Anyway, trust me, it would have looked like it worked if I had searched for the right thing! :)

You can try it by running from the branch (bzr branch lp:~rick-rickspencer3/+junk/stealthy). However, I plan to add a little ninja icon instead of the stock indicator icon, and also a warning dialog for the Delete History function. Then I'll put it in my PPA, and also probably sell it in the Software Center for the minimum price.

Read more
Rick Spencer

Using PyGame in a PyGtk App

I've written a few games in pygame, but never quite finished them, generally because of the complexity of adding user interactivity for things outside the essential gameplay. While pygame is a sweet sprite library, it has no widget toolkit, so things like collecting a string from the user, having menus, etc. are all incredibly tedious to program. Adding something like a high score screen is as much effort as the game programming itself.

Pygtk, however, has a rich widget toolkit, but it's hard to use it from within a pygame app. This is difficult in part because Gtk needs a gtk.main loop, but a typical pygame app runs its own loop with clock.tick(). As a result, there are threads locking each other out, and general craziness ensues.

But, it turns out, you can embed a pygame surface within a pygtk app, and it's actually pretty easy. So you get just the one gtk.main loop, all interrupt-driven programming, access to the whole Gtk toolkit, and also all the easy loading of images, playing of sounds, collision detection, and other sprite functionality of pygame. It's really the best of all possible worlds.

I wrote a minimal demo script to show how to do this. The "game" is just a single sprite that you can move with the keyboard. You can see in the screenshot at the top of this post that there is a gtk menu being activated!

I didn't intend this to be a tutorial on pygame, but rather on integrating pygame into pygtk, so all the "game" does is let you move a sprite around.

It's probably easiest to look at the whole script, but let me point out some of the steps involved. I'll assume that you've already set up a gtk.Window with your menus and such. So, the first step is to set up a gtk.DrawingArea that will become the pygame surface. It's normal code to add it to your window:

da = gtk.DrawingArea()
da.set_size_request(300, 300)
da.show()
vbox.pack_end(da)
da.connect("realize", self._realized)

This last line, connecting to the realize signal, is important, because you use the id of the drawing area's gtk.gdk.window to associate the drawing area with pygame. The drawing area's window won't have an id until it's been rendered by gtk. So then, in the handler for the realize event, we set up the association with the drawing area, and also start actually drawing the game:


def _realized(self, widget, data=None):
    # assumes os, pygame, and gobject are imported at the top of the script
    os.putenv('SDL_WINDOWID', str(widget.window.xid))
    pygame.init()
    pygame.display.set_mode((300, 300), 0, 0)
    self.screen = pygame.display.get_surface()
    gobject.timeout_add(200, self.draw)

The SDL_WINDOWID line plus the two pygame calls essentially set up the drawing area as the pygame surface. The next line stores the pygame screen object in a member variable that will be used in the draw function. Finally, the draw function gets called every 200 milliseconds via a gobject timeout. This is a much easier way of setting up the game loop than the normal pygame way of blocking the clock for a certain number of ticks, as gobject.timeout_add plays nicely with the gtk.main loop.

In a real game, you'd probably put an "update" function on the timeout to do things like check for collisions, random events, and so on (you'd also probably use a much smaller timeout period than 200 milliseconds); a sketch of such an update function appears after the draw function below. However, our game only has one sprite, and it doesn't do anything but move, so we don't need that functionality. But how do we tell it to move? In pygame, every time through the main loop you pull all the events off the event stack. With pygtk, you use signals to handle the input when it occurs.

To do this, in the __init__ function for the window, we set up our pygame objects, and then connect to the key-press signal:
#set up the pygame objects
self.image = pygame.image.load("sprite.png")
self.background = pygame.image.load("background.png")
self.x = 150
self.y = 150

#collect key press events
self.connect("key-press-event", self.key_pressed)

Then in the signal handler, we manipulate those objects. Typically, you'd store the x and y coordinates in a sprite object, but since there is only one sprite to track in this "game", we can handle it more simply. So in the key_pressed handler, we just detect if an arrow key was pressed, and we adjust the game data appropriately:


def key_pressed(self, widget, event, data=None):
    if event.keyval == 65361:      # left arrow
        self.x -= 5
    elif event.keyval == 65362:    # up arrow
        self.y -= 5
    elif event.keyval == 65363:    # right arrow
        self.x += 5
    elif event.keyval == 65364:    # down arrow
        self.y += 5
Then, every 200 milliseconds, the timeout comes by and tells pygame to draw everything using normal pygame functions:


def draw(self):
    self.screen.blit(self.background, [0, 0])
    rect = self.image.get_rect()
    rect.x = self.x
    rect.y = self.y
    self.screen.blit(self.image, rect)
    pygame.display.flip()
    return True  # returning True keeps the gobject timeout running
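
As an aside, here's a rough sketch of the kind of "update" function mentioned earlier that a real game might put on the timeout. The guy and enemies sprite groups and the handle_hit method are hypothetical, not part of this demo:

def update(self):
    # hypothetical per-frame logic for a fuller game: check for collisions
    # between the player sprite and a group of enemy sprites
    for enemy in pygame.sprite.spritecollide(self.guy, self.enemies, False):
        self.handle_hit(enemy)  # hypothetical collision handler
    self.draw()
    return True  # keep the gobject timeout alive

# wired up in _realized with a snappier period than the demo's 200ms:
# gobject.timeout_add(33, self.update)  # roughly 30 frames per second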
Since pygame is getting updated in a gobject timeout and in normal Gtk signal handlers, and not in its own game loop, gtk.main is free to run and handle menus, and even display dialogs.
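
To round things out, here's a minimal sketch of the wiring around these handlers, assuming the window class above is called GameWindow; that name is mine, not from the script:

import gtk

if __name__ == "__main__":
    win = GameWindow()                     # hypothetical name for the gtk.Window subclass above
    win.connect("destroy", gtk.main_quit)  # end the one gtk.main loop on close
    win.show_all()
    gtk.main()  # the single main loop drives menus, signals, and the timeout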

Read more