Canonical Voices

What The Raving Rick talks about

Rick Spencer

User Interaction with Ubuntu Components



I am so very able to amuse myself with all these funny pictures, but there are amusing subreddits other than funny. So today I added the ability to choose a different subreddit. This involved diving into the world of Ubuntu Components.

Ubuntu Components were surprisingly functional, but as you will see, they are still a work in progress.

So, the first thing I needed to do was to make a way for the user to say that they want to change subreddits. Ubuntu Touch provides the bottom edge for your application to add a list of commands. This is a nice way to do it because it means that the screen isn't cluttered with commands, but users know exactly where to go when they want them.

The first thing to do is to ensure that the top level of your app is a MainView, and that you present the content in a Page. So, roughly, your app's structure looks like ...

//imports
MainView
{
    //set MainView properties
    Page
    {
        //app UI content
    }
    //components outside pages
}

I wanted to add a "subreddit" button. To do that, I set the tools property of the page to a list of actions. So far there is only one action. Essentially, I am defining a button. I tell it the text I want, the icon to use (I downloaded an icon that I wanted) and the action to take. So this goes inside the Page tag:

        tools: ToolbarActions
        {
            Action
            {
                text: "SubReddit"
                iconSource: Qt.resolvedUrl("reddit.png")
                onTriggered: PopupUtils.open(subRedditSheet)
            }
        }
Now you can see that if I swipe from the bottom, I get my button.
But what is that action? What I did was create a ComposerSheet that allows the user to input the subreddit that they want. You do this by defining a Component that wraps a ComposerSheet. A ComposerSheet is kind of like a dialog box. It handles putting Ok and Cancel buttons on for you. Note that you have to name the ComposerSheet "sheet", or you get errors. All I added was a TextField that I called "subRedditText", but you can fill the ComposerSheet with whatever you want.

I just have to tell it what to do when the user clicks Ok. I did a little refactoring from yesterday to create a "changeSubReddit" function that gets called on start up, and can get called from here. I just pass it the subreddit. (Don't forget to import Ubuntu.Components.Popups 0.1 in order to use the ComposerSheet.)

Then you call PopupUtils.open to pop it open when you want it (which I specify as the action for my reddit button in the ToolbarActions).

    Component
    {
        id: subRedditSheet

        ComposerSheet {
            id: sheet;
            title: "Choose SubReddit"
            TextField
            {
                id: subRedditText
            }

            onCancelClicked: PopupUtils.close(sheet)
            onConfirmClicked:
            {
                changeSubReddit(subRedditText.text)
                PopupUtils.close(sheet)
            }
        }
    }


As you can see, the ComposerSheet doesn't have size logic yet (or, as likely, I did something wrong), but nonetheless, it works!

I got a surprising amount of free functionality from using Page, ToolbarActions, and ComposerSheet. On top of being easy and fun, it means that my app will inherit the look and feel, interaction patterns, translations and Ubuntu Touch style guidelines!



Read more
Rick Spencer

Time Waster Turbo Charge


After my post yesterday about browsing 9gag.com, I decided I should make an even more efficient time waster. Way back in the day, I made an app called "lolz". But a lot has changed since then. Including the emergence of Imgur. Imgur is a service that hosts images for Reddit. And Reddit is the world's most efficient time waster.

Also, Imgur has a nice API, so access to the data seemed pretty easy. Thus, I bring you, the greatest time wasting app of ever.

Here's how I got started.

QML has a Model-View-Controller architecture built in. I found that I could take advantage of this for my app because the imgur API optionally serves up XML. If you have the option to get XML, use it! It makes things go much faster.

So, the heart of the app as it exists today is my XmlListModel. An XmlListModel essentially turns XML into a list of objects or key/value pairs that other QML components can use. So, I made a model like this:

    XmlListModel
    {
        id: imagesXML;
        query: "/data/item";
        XmlRole
        {
            name: "imgURL";
            query: "link/string()";
        }
    }

There are 2 parts. The first is the "query". This tells the model which elements in the XML to look for. The pictures from Imgur are "items", so I tell the model to look inside data tags for item tags.

The second part is a set of XmlRoles. XmlRoles convert the info in the Xml tags into key value pairs for other components to consume. So, I tell it to make a key called "imgURL" that is paired with the value of whatever string is stored in the "link" tag. In the imgur API, "link" is a url that goes directly to the image.
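If it helps to see the same extraction outside QML, here is a rough Python sketch of what the model is doing (the sample XML is invented, but follows the data/item/link shape described above):

```python
import xml.etree.ElementTree as ET

# A tiny stand-in for an imgur gallery response: a <data> root wrapping
# <item> elements, each with a <link> child pointing at the image.
sample_xml = """
<data>
  <item><link>https://i.imgur.com/abc.jpg</link></item>
  <item><link>https://i.imgur.com/def.jpg</link></item>
</data>
"""

def extract_image_urls(xml_text):
    """Mimic the XmlListModel: query /data/item, role imgURL = link/string()."""
    root = ET.fromstring(xml_text)
    # root *is* the <data> element, so we query its <item> children
    return [{"imgURL": item.findtext("link")} for item in root.findall("item")]

print(extract_image_urls(sample_xml))
```

Each dictionary in the resulting list plays the part of one model entry, with "imgURL" as the role name the delegate consumes.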

But where does the XML come from? Typically, you would use the source property to point the XmlListModel to a url with xml. But this won't work with the imgur API because you need to set a header in the http request that includes your client ID for the API. Well, at least I couldn't figure out how to tell the XmlListModel to set a value in the header.

However, I didn't fret. Instead I used good old javascript XMLHttpRequest to get the XML, and then to set the xml property for the list model. So I made an init() function that runs when the app is ready.

    function init()
    {
        var req = new XMLHttpRequest();
        var location = "https://api.imgur.com/3/gallery/r/funny/time/1.xml";
        req.open("GET", location, true);
        req.setRequestHeader('Authorization', 'Client-ID xxxxxxxx');
        req.onreadystatechange = function()
        {
            if (req.readyState == 4)
            {
                imagesXML.xml = req.responseText;
                activityIndicator.running = false;
                activityIndicator.visible = false;
                imagesView.visible = true;
            }
        };
        req.send(null);
    }

This function makes a request, and when it gets a response it sets the XmlListModel's xml property to the responseText. Again, this is very classic AJAXy programming. Then I do a few lines of setting the UI. You may notice that this includes making the imagesView visible. imagesView is the list view that I created to be the view for the model.

        ListView
        {
            visible: false;
            id: imagesView;
            model: imagesXML;

            orientation: Qt.Horizontal;
            anchors.fill: parent;
            delegate: MouseArea
            {
                height: parent.height;
                width: root.width;

                Image
                {
                    width: root.width;
                    height: parent.height;
                    fillMode: Image.PreserveAspectFit;
                    source: imgURL;
                }

                onClicked:
                {
                    if(children[0].fillMode == Image.Pad)
                    {
                        children[0].fillMode = Image.PreserveAspectFit;
                    }
                    else
                    {
                        children[0].fillMode = Image.Pad;
                    }
                }
            }
        }

A ListView has 2 key parts. The first part specifies the model for the view using the model property. Easy enough.

The second part is the "delegate". A delegate is the component that you create for each entry in the model. As you can see, I chose to make a MouseArea for each imgURL. The MouseArea, in turn, contains an Image for displaying the image. The MouseArea is set up so I can do some interactions with the Image as desired. Currently, a tap toggles the Image between scaling to fit the view and displaying at its natural size.

The great thing about the ListView is that it is a Flickable. So I get the nice flicking behavior of scrolling the list left and right for free! The ListView does other key things, especially, loading the images on demand. It only loads the images when they are scrolled into view. This saves a lot of network and memory resources.
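The load-on-demand behavior can be sketched in Python (the class and names here are my own invention, just to illustrate the idea, not anything ListView actually exposes):

```python
# A sketch of the on-demand loading idea: an item is only fetched when it
# first scrolls into view, and nothing is fetched twice.
class LazyGallery:
    def __init__(self, urls, fetch):
        self.urls = urls
        self.fetch = fetch          # e.g. an HTTP download; injected here
        self.cache = {}

    def visible(self, index):
        """Called when the delegate at `index` scrolls into view."""
        if index not in self.cache:
            self.cache[index] = self.fetch(self.urls[index])
        return self.cache[index]

fetched = []
gallery = LazyGallery(["a.jpg", "b.jpg", "c.jpg"],
                      fetch=lambda url: fetched.append(url) or f"<{url}>")
gallery.visible(0)
gallery.visible(0)          # second view of the same item: served from cache
print(fetched)
```

Only the images that have actually scrolled into view are ever downloaded, which is exactly the network and memory saving described above.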

So, for the next features, I think I shall allow the user to choose a subreddit to browse.

Read more
Rick Spencer

Time Killer Extraordinaire

Every morning I update to the new Ubuntu Touch Image [change log] on my Nexus 7. Then at lunch, I "use" it to check Facebook and read funny posts. Lunch time time killing use case nailed.

Read more
Rick Spencer

Sweet Ubuntu Device QtCreator Integration

I spent a bit of today using the Ubuntu Device integration features in Qt. It's fresh software, but it's really easy and fun. Here is the development version of the game I am working on, running on my desktop. Notice that I set the size of the window, and therefore the play area, very intentionally. But I had to think to myself, "Will the touch interactions work ok on my tablet? What about the sizes?" Fortunately, getting it onto my tablet is pretty easy.

There is a device button in the left hand channel of QtCreator. I connected my Nexus 7 to my desktop via USB. Clicked Detect Devices, and there it is! Look at the many buttons that will make managing my device easier. For example, I am looking forward to trying the Upgrade to Daily Image button tomorrow. 

So, how do I run it on my Nexus 7? I just use the command to do so! Notice there are other cool commands there too to try later. 
After picking "Run on Device" my app showed up on my tablet. As you can see from the screenshot, it had some issues! However, the touch screen worked the way I was hoping. Obviously, I need to think more about sizing and containment to make it all work correctly. Fortunately, it will be very easy to test it all.

Of course, I wanted a screenshot for this post. But how would I get that? With the Tools -> Ubuntu -> Device menu, of course! This menu has some other useful functions for managing the device. For example, the apt-get menu will allow me to install dependencies for my app.
All in all, I'm really pleased with the Ubuntu Device integration. It seems like it will help make app development for my tablet and phone easy and fun.

Read more
Rick Spencer

Extract Class Refactor Built in QtCreator

Following up from my post about how I think about inheritance yesterday, I thought I'd do a quick post about a refactoring feature built into the QtCreator editor.

In this example, I decided I wanted to add a box that lets the user enter a name for a high score if they achieved it, and then to display all the high scores. I got started by creating a UbuntuShape with a column and sub components in main.qml, but quickly realized that I will have a lot of behavior and presentation to manage. This would be much easier to develop in its own component (or "class" as I think of it).

So, I just right clicked in the editor and used the refactoring menu to "Move Component into Separate File." 
I got a dialog that asked for a name, I chose "HighScoreBox" and it created a new file for me, and replaced all of my QML code in main.qml with just the little bit of code needed to declare the object.

Now I am ready to properly develop the behavior for the component. Like any good refactoring tool, the code kept working. 




Read more
Rick Spencer

Gotta love the "developer art" ... those placeholder images should be replaced by sweet Zombie artwork as the game nears completion.
For a long time I resisted the QML wave. I had good reasons for doing so at the time. Essentially, compared to Python, there was not much desktop functionality that you could access without writing C++ code to wrap existing libraries and expose them to QML. I liked the idea of writing apps in javascript, but I really did not relish going back to writing C++ code. It seemed like a significant regression. C++ brings a weird set of bugs around memory management and rogue pointers. While manageable, this type of coding is just not fun and easy.

However, things change, and so did QML. Now, I am convinced and am diving into QML.
  • The base QML libraries have pretty much everything I need to write the kinds of apps that I want to write.
  • The QtCreator IDE is "just right". It has an editor with syntax highlighting and an integrated debugger (90% of what people are looking for when they ask for an IDE) and it has an integrated build/run system.
  • There are some nice re-factoring features thrown in, that make it easier to be pragmatic about good design as you are coding. I also like the automatic formatting features.
  • The QML Documentation is not quite complete, but it is systematic. I am looking forward to more samples, though, that's for sure.

In my first few experiences with QML, I was a tiny bit thrown by the "declarative" nature of QML. However, after a while, I found that my normal Object Oriented thought processes transferred quite well. The rest of this post is about how I think about coding up classes and objects in QML.

In Python, C++, and most other languages that support OO, classes inherit from other classes. JavaScript is a bit different, objects inherit from objects. While QML is really more like javascript in this way, it's easy for me to think about creating classes instead.

I will use some code from a game that I am writing as an easy example. I have written games in Python using pygame, and it turned out that a lot of the structure of those programs worked well in QML. For example, having a base class to manage essential sprite behavior, then a sub class for the "guy" that the player controls, and various subclasses for enemies and powerups.
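Since I came to this structure from pygame, here is how I would sketch the same idea in ordinary Python (class and method names here are my own invention for illustration, not the actual game code):

```python
class CharacterSprite:
    """Base class managing essential sprite behavior."""
    def __init__(self, x=0, y=0, move_distance=10, kill_callback=None):
        self.x, self.y = x, y
        self.move_distance = move_distance
        self.kill_callback = kill_callback
        self.dead = False

    def take_turn(self, target_x, target_y):
        # step toward the target by move_distance on each axis
        self.x += self.move_distance if target_x > self.x else -self.move_distance
        self.y += self.move_distance if target_y > self.y else -self.move_distance

    def kill(self):
        # hide self, then run the callback if one was set
        self.dead = True
        if self.kill_callback:
            self.kill_callback(self)

class Zombie(CharacterSprite):
    """Subclass: inherits all behavior, changes only configuration."""
    def __init__(self, **kwargs):
        kwargs.setdefault("move_distance", 5)  # zombies shamble slowly
        super().__init__(**kwargs)
```

In QML, the same base-class/subclass relationship is expressed by declaring a new component whose outermost type is the base component, as the CharacterSprite examples that follow show.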

For me, what I call a QML "baseclass" (which is just a component, like everything else in QML) has the following parts:
  1. A section of Imports - This is a typical list of libraries that you want to use in your code.
  2. A definition of its "isa"/superclass/containing component - Every class is really a component, and a component is defined by declaring it, and nesting all of its data and behaviors in curly brackets.
  3. Parameterizable properties - QML does not have constructors. If you want to parameterize an object (that is, configure it at run time), you do this by setting properties.
  4. Internal components - These are essentially private properties used within the component.
  5. Methods - These are methods that are used within the component, but are also callable from outside the component. Javascript does, actually, have syntax for supporting private methods, but I'll gloss over that for now.
In my CharacterSprite baseclass this looks like:

Imports

import QtQuick 2.0
import QtQuick.Particles 2.0
import QtMultimedia 5.0

Rectangle is a primitive type in QML. It manages presentation on the QML surface. All the code except the imports exists within the curly braces for Rectangle.

Parameterizable Properties

property int currentSprite: 0;
property int moveDistance: 10
property string spritePrefix: "";
property string dieSoundSource: "";
property string explodeParticleSource: "";
property bool dead: false;
property var killCallBack: null;

Internal Components

For readability, I removed the specifics.
Repeater
{
}
Audio
{
}
ParticleSystem
{
    ImageParticle
    {
    }
    Emitter
    {
    }
}

Methods

With implementation removed for readability.
function init()
{
    //do some default behavior at start up
}
function takeTurn(target)
{
    //move toward the target
}
function kill()
{
    //hide self
    //do explosion effect
    //run a callback if it has been set
}

Now I can make a zombie component by creating a new file called ZombieSprite.qml and simply setting some properties (and adding some behavior as desired). Note that I declare this component to be a CharacterSprite instead of a Rectangle as in the CharacterSprite base class. For me, that is the essence of defining inheritance in QML.

CharacterSprite
{
    spritePrefix: "";
    dieSoundSource: "zombiedie.wav"
    explodeParticleSource: "droplet.png"
    Behavior on x { SmoothedAnimation{ velocity: 20 } }
    Behavior on y { SmoothedAnimation{ velocity: 20 } }
    height: 20
    width: 20
}

I can similarly make a GuySprite for the sprite that the player controls. Note that I added a teleportTo() function to Guy.qml because the guy can teleport, but other sprites can't. I can call the kill() function in the collideWithZombie() function because it was inherited from the CharacterSprite baseclass. I could choose to override kill() instead by simply redefining it here.
CharacterSprite
{
    id: guy
    Behavior on x { id: xbehavoir; SmoothedAnimation{ velocity: 30 } }
    Behavior on y { id: ybehavoir; SmoothedAnimation{ velocity: 30 } }
    spritePrefix: "guy";
    dieSoundSource: "zombiedie.wav"
    explodeParticleSource: "droplet.png"
    moveDistance: 15
    height: 20;
    width: 20;
    function teleportTo(x,y)
    {
        xbehavoir.enabled = false;
        ybehavoir.enabled = false;
        guy.visible = false;
        guy.x = x;
        guy.y = y;
        xbehavoir.enabled = true;
        ybehavoir.enabled = true;
        guy.visible = true;
    }
    function collideWithZombie()
    {
        kill();
    }
}

So now I can set up the guy easily in the main qml scene just by connecting up some top level properties:
Guy {
    id: guy;
    killCallBack: gameOver;
    x: root.width/2;
    y: root.height/2;
}

Read more
Rick Spencer

Dell Vostro without Windows Tax

I guess today is "Cyber Monday" or something like that? I'm looking for a reasonably priced laptop for my 13 year old daughter and saw that I had an email from Dell with some offers in it. After a little clicking around to compare different laptops, I ended up on a comparison page for the Vostro line. Naturally, I was drawn to the $299 priced laptop, when I noticed that there are 2 computers that differ only in the OS pre-installed, AND in price by $70! This is the first time I've just naturally stumbled upon such a blatant exposure of the Windows tax.


Read more
Rick Spencer

The Road to 14.04




Copenhagen was quite a UDS. I found it to be very "tight" and well organized. The sessions seemed to be extra productive. I think compressing UDS down to four days helped us be more focused, and also less tired by the end. Maybe next one should be three days? Copenhagen was also great for the content. Working on getting the desktop running on the Nexus 7 was very interesting and fun. Also, we made a lot of progress in terms of how we make Ubuntu. I think this will be an unusually fun cycle thanks to some of the changes to our development process.
Sleestacks are monsters bent on keeping you in the past

One of the great things about my job is that I get to talk to so many people about their vision for Ubuntu. As you can imagine, I run into a lot of variety there. However, there is a certain shared vision that has come together. As we all look forward to 14.04, we can mostly agree about what we want that release to look like.

Seeing the future through conversations with community members, developers, users, etc....
The 14.04 Vision
Imagine that you are running Ubuntu 14.04. What will your experience be like?

  • Robust
  • Fresh
  • Easy to Make
  • Ubiquitous

First, the quality will be impeccable. You will not even think about up-time. The system will work close to perfectly. You will eagerly take updates and upgrades knowing that they will only make your system better. Applications will run smoothly and won't cause any system-wide problems.
Secondly, you will be able to get the freshest apps that you want on your client machines, and the freshest workloads on your servers. As a developer, you'll be able to deliver your applications directly to end users, and be able to easily update those applications.
Thirdly, we will have an extremely efficient release process, one that also inspires confidence in developers. Good changes will reach end users quickly, while mistakes are easily caught and corrected well before users are exposed to them.
Finally, by 14.04 Ubuntu will run everywhere. The same Ubuntu will be on your phone, your tablet, your netbook, your laptop, your workstation, your cloud server hosts, and the instances powering workloads in your public and private clouds. The same product with the same engineering running everywhere. A simpler world with Free code everywhere.

How Do We Get There?

"It's more important to know where you are going than to get there quickly"
We have laid out the steps necessary to achieve this vision. We intend to make this real! In fact, we've already achieved some of the things necessary to get where we want to be for 14.04. Overall, by 14.04 we need to:

  • Assure quality at every step of the development process
  • Improve application sand-boxing
  • Simplify the release schedule
  • Implement continuous integration in the Ubuntu development process
  • Expand Ubuntu to include mobile form factors, such as phones and tablets

Assured Quality at Every Step

Leaping through time ensuring everything stays on track
In 13.04 we will change our full testing cadence from testing at each Alpha or Beta milestone, to doing full test runs every 2 weeks. This is about a 3 times increase in the rate of manual community testing. Furthermore, we will test more broadly, more deeply, and more rigorously, so that we will have a more complete view of the quality of Ubuntu during the development release.
We will also be leveraging some previous work to create a GUI testing tool. We call this tool "Autopilot" and it is designed to drive all the components of Unity in a testing environment. In 13.04 we will see expanded usage of this tool, and critically, dramatically more tests written. We will then be able to catch regressions in the Ubuntu user experience earlier, and ensure that fewer regressions make their way into the development release.
The one and only Martin Pitt has implemented a new test harness along with tests for GNOME. In this way, the Canonical QA labs will be able to identify regressions in GNOME as soon as they are introduced. This should allow GNOME developers to more quickly spot and fix problems, raising the overall quality of the GNOME code and improving the velocity of the GNOME developers.
Finally, after 13.04 ships, we will start doing updates in a new way. Once 13.04 is a stable release, updates to that release will not be delivered to all users as soon as they are available. Rather, updates will go out to a small number of users, and the system will automatically monitor whoopsie-daisy to ensure that users aren't experiencing issues due to the update before releasing the update to yet more users. We call this "phased updates".
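To make the phased-updates idea concrete, here is a Python sketch of the control loop (cohort sizes and the error threshold are invented for illustration; they are not Ubuntu's actual values):

```python
def phase_update(phases, error_rate_for):
    """Roll an update out in increasing cohorts, halting if errors spike.

    phases: cohort sizes as fractions of the user base, smallest first.
    error_rate_for: callable reporting the observed error rate (e.g. from
    whoopsie crash reports) once a given fraction has the update.
    """
    ERROR_THRESHOLD = 0.01  # invented threshold for illustration
    for fraction in phases:
        if error_rate_for(fraction) > ERROR_THRESHOLD:
            return ("halted", fraction)   # pull the update, fix, retry
    return ("released", phases[-1])

# A healthy update progresses through every phase...
print(phase_update([0.05, 0.25, 1.0], lambda f: 0.001))
# ...while a problematic one is stopped while only a small cohort has it.
print(phase_update([0.05, 0.25, 1.0], lambda f: 0.05))
```

The point of the staging is visible in the second call: the bad update never reaches beyond the first small cohort.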

Ubuntu Continuous Integration

Every day is like the day before, Daily Quality
There are 2 new things that we are doing in 13.04 to get us to the world of 14.04 where releases are easy and confidence inducing. First, Colin Watson has set up the Ubuntu build system so that all packages are built in a staging area (by reusing the proposed pocket for the development release). Only when a package is built successfully along with all of its dependencies are the packages copied into the release pocket and sent out to the wider development release. This means that there will be no more breakages due to out-of-sync packages when you update. Compiz and Nux will always be built together before they are copied over. The whole xorg stack too.
Building things in proposed provides an opportunity to assure the quality of the packages before they go into the release pocket. This will be accomplished with auto-package testing. Essentially, tests that come with a package will be run when the package is built. Additionally, any package that depends on the new package will also run its auto-package tests. The package will only be copied into the release pocket when all of the tests pass!
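Putting the two gates together, the migration rule can be sketched like this (package names and test results here are invented for illustration):

```python
def can_migrate(package, built, tests_pass, reverse_deps):
    """Decide whether a package may leave -proposed for the release pocket.

    built: did the package (and its dependencies) build successfully?
    tests_pass: callable running a package's auto-package tests.
    reverse_deps: packages that depend on this one; their tests are
    re-run against the new version before migration.
    """
    if not built:
        return False
    if not tests_pass(package):
        return False
    # A regression in anything that depends on us also blocks migration.
    return all(tests_pass(dep) for dep in reverse_deps)

# Invented example: compiz updates, and nux (which depends on it) is re-tested.
results = {"compiz": True, "nux": True}
print(can_migrate("compiz", built=True,
                  tests_pass=lambda p: results[p],
                  reverse_deps=["nux"]))
```

If nux's tests were to fail against the new compiz, the update would simply wait in proposed instead of breaking the development release.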

Start Application Insulation

Protecting the world from pure evil

By 14.04 we expect most applications to be run in a secure manner, so that poorly written or even malicious applications will have limited opportunity to do damage to a user's system. In 13.04 the Security Team is moving ahead with lots of work to enable AppArmor throughout Ubuntu, in addition to isolating some common infrastructure in use today, such as online accounts, gnome-keyring, and even dbus.
In this way, applications will be able to run and access only the small subset of the system that is relevant to them. When a user installs an application it will come with an AppArmor profile that will ensure that the kernel can insulate the system from the application appropriately. The fruits of this labor should be widely visible by 13.10.

Simplified Releases

A time turner literally creates more time
Ubuntu has traditionally held a series of Alphas and Betas. These had the purpose of ensuring that we had an installable image at least a few times during the release cycle, and of providing an opportunity to do some wide testing of the system. This meant that several times throughout the release cycle we would stop development on Ubuntu, freeze the archive, and roll a release.
Since the advent of daily quality, Ubuntu can install pretty much every day. Furthermore, we are opting for much more frequent testing than the milestones allowed. Therefore, the Alphas and Betas have limited utility, but would have continued to sap our development velocity. So, in 13.04, Ubuntu is making the bold move of skipping all Alphas, and having just a single Beta! This also allowed us to extend certain freezes, especially Feature Freeze. The new schedule has much more time for finishing features and fixing bugs, and much less time in freezes.






Read more
Rick Spencer

Let's Roll with 12.10


As a consequence of our daily quality efforts, some very interesting developments have taken place for 12.10.

First, while knocking around UDS, it occurred to me in a bit of a flash that all of the effort that we invest in freezing the archives to make Alpha and Beta releases for Ubuntu is wasted work that slows down our velocity. We have daily quality and we have started using -proposed in the development release, so the chance of having an uninstallable image is greatly reduced.

(you can read the discussion on @ubuntu-devel)

So, why do we have Alphas and Betas? After some discussion, it seems to come down to:

  1. Because we want to encourage widespread testing by community members on a variety of hardware at a regular cadence
  2. Because we want targets for features and bug fixes
  3. Because we need to test our ISO production capabilities
  4. Because we always had them

Does all the effort in freezing the archive actually help? I don't think so. In fact, I think it is counter-productive.

  1. We can do the same testing with daily images. Furthermore, we can do that testing at a cadence of our liking, or even out of cadence if we want to squeeze in a special test run at some point. The ISO tracker nicely accommodates this now.
  2. Freezing the archive, by definition, *stops* packages and therefore bug fixes and features from getting uploaded. 
  3. Surely we don't need to slow down everyone's work so that we can try producing ISOs, and surely we don't need to do it so often and early.
  4. Of course, "because we always did" is not much of a reason.

It seems that what is needed is a regular cadence of deep and broad testing by the community to augment our automated tests, along with trial runs to ensure that our ISO building tools and process are working. Therefore, I propose we:

  1. Stop with the alphas and betas and win back all of the development effort
  2. Increase the cadence of "ISO testing" to whatever we want or whatever the community team can manage
  3. Spin a trial ISO near what is now beta time
  4. Spin ISOs for release candidates

Read more
Rick Spencer

«Bonjour Le Monde»


Forgive me for massacring a beautiful language ...

I have been dreaming of teaching a programming course in French. The course would have two objectives. The first objective is an introduction to the world of programming. The second objective is French practice for me. The students learn programming, and I learn to speak more French.

So, I present to you:
La Programmation pour Les Debutantes Absolus, En Mauvais Français (Programming for Absolute Beginners, in Bad French)

I will hold the course twice a week, in the evening. I will also have office hours for questions and discussions.

We will use Apprendre à programmer avec Python. I will cover two chapters each class, but we will stop at chapter 12 or earlier. So, the course will run about three weeks.

We will use Google Hangouts for the classes. So, you will need a webcam and a microphone, but the course is free. Also, the course uses Ubuntu, naturally ;)

Finally, I need to find students. If you would like to start learning programming (and help me with my French), you can leave a comment here, find me on IRC (rickspencer3 on freenode), or send me an email. Once I have found students, we will work out good times for the classes.

Read more
Rick Spencer


I have become quite fascinated by using HTML5 for rendering my GUIs on my Ubuntu applications. I love doing this, because I can continue to use Python as my library and desktop integration point, while being free to use cutting edge presentation technology.

I sat down with didrocks yesterday and we set off to create a simple Quickly template out of some of the code I've written for bootstrapping these projects. The template will be very very simple. All it will do is set up communication between HTML and Javascript in a GtkWebKit window, and a Python back end. Developers will be free to choose how to use the WebKit window. For example, they could use JQuery or a host of other javascript libraries if they chose.

Didrocks was adamant that we should expose the excellent debugger (called The Inspector) that comes with WebKit in the template. However, I have found that for GtkWebKit, the documentation is sketchy (at best), and the API is unpredictable in its behavior. So, it took us 2 hours of experimentation and trolling source code to make an implementation that actually worked for showing The Inspector.

So, if you have been trolling the web looking for how to make this work .. I hope this works for you! Without further ado, here is a commented minimal example of a WebKit window that shows the Inspector. I also pushed a branch with just the code in case you find that easier to read or work with.

from gi.repository import WebKit
from gi.repository import Gtk
import os

#The activate_inspector function gets called when the user
#activates the inspector. The splitter is a Gtk.Paned and is user
#data that I passed in when I connected the signals below.
#The important work to be done is to create a new WebView
#and return it in the function. WebKit will use this new View
#for displaying The Inspector. Along the way, we need to add
#the view to the splitter.
def activate_inspector(inspector, target_view, splitter):
    inspector_view = WebKit.WebView()
    splitter.add2(inspector_view)
    return inspector_view

#create the container widgets
window = Gtk.Window()
window.set_size_request(400, 300)
window.connect("destroy", Gtk.main_quit)
splitter = Gtk.Paned(orientation=Gtk.Orientation.VERTICAL)
window.add(splitter)

#create the WebView
view = WebKit.WebView()

#Use set_property to turn on enable-developer-extras. This will
#cause "Inspect Element" to be added to WebKit's context menu.
#Do not use view.get_settings().enable_developer_extras = True,
#this does not work. Only using "set_property" works.
view.get_settings().set_property("enable-developer-extras", True)

#Get the inspector and wire up the activate_inspector function.
#Pass the splitter as user data so the callback function has
#a place to add the Inspector to the GUI.
inspector = view.get_inspector()
inspector.connect("inspect-web-view", activate_inspector, splitter)

#make a scroller pane to host the main WebView
sw = Gtk.ScrolledWindow()
sw.add(view)
splitter.add1(sw)

#put something in the WebView
html_string = "<HTML><HEAD></HEAD><BODY>Hello World</BODY></HTML>"
root_web_dir = os.path.dirname(os.path.dirname(__file__))
root_web_dir = "file://%s/" % root_web_dir
view.load_html_string(html_string, root_web_dir)

#show the window and run the program
window.show_all()
Gtk.main()

Read more
Rick Spencer

Thanks mterry! (Quickly Tutorial Updated) :)

So I decided I had to bite the bullet this morning and update the ubuntu-application tutorial for Quickly, since desktopcouch is no longer supported and I therefore removed CouchGrid from quickly.widgets. So I start looking through the tutorial to make notes about what I need to change, and I find everything already fixed by Michael Terry! Amazing. I love working (or in this case not working) on open source projects ;)

Read more
Rick Spencer

12.04 Quality Engineering Retrospective

Ubuntu 12.04 LTS Development Release (Precise Pangolin) Beta 2 is (most likely) going to be released today. This means we are getting quite close to final release! I have been running Precise as my only OS on both of my computers for months now, and it is far and away my favorite desktop I've ever used. It is beautiful, fast, and robust. This post is about the robust part.

After we release Beta 2, we should continue to see Ubuntu and Ubuntu Server improving day by day and quickly achieving release quality. I have asked Kate Stewart, our release manager, to do everything in her power to ensure that starting with Final Freeze on April 12th each daily image is high enough quality that it could be our final release.

Why am I so confident that Ubuntu will only get better and better? Because Ubuntu stayed at a usable level of quality throughout the development cycle. This created a virtuous cycle: because Ubuntu was easier to develop and test with, it was then easier to maintain the quality.

After the last UDS, I described how we planned to maintain quality throughout the release. We followed those plans, and got the expected results. I am very, very proud of the results.

But rather than repeating the activities that we did, I thought I would look back and see what values arose from those practices. I think it was the following values that really had the impact, and that we should build on for 12.10 and beyond.
  1. Verify and fix before landing major changes in Ubuntu
  2. Don't wait to take action when something breaks
  3. Test for testability, then test rigorously
Let me provide some specific examples for each.

Verify And Fix Before Landing

Previously, teams would rush to meet certain development milestones, with the goal of meeting the letter of the law. A package had to be uploaded before Feature Freeze, for example, so a team would just push what they had, even if it was not proven to work, or even known not to work at all!

In 12.04 we took a different approach with packages that tended to have significant impact on the usability of Ubuntu, or that were otherwise important. In fact, the xorg team has been following this approach for many releases, using their "edgers" PPA and calls for testing. For many releases new versions of X were vetted by a community of dedicated community testers before being uploaded to the development version of Ubuntu. In 12.04, they took this even a step further. In previous releases, users of the development release might upgrade while the archive was in an inconsistent state, because some parts of the new X stack had built while others were still building or waiting to build. This could result in situations where X was uninstalled altogether! In 12.04, the X team actually built the X stack separately, and then copied the binaries into the archive. Totally verified and fixed before landing!


Many folks have noted the dramatically increased robustness of Unity during the 12.04 development cycle. The Unity team did a lot of work to tune and improve their development processes. This included using a PPA for each new release of Unity, and then having that release rigorously tested (with test cases, a testing tool, etc...) by community members with different kinds of graphics hardware and other setups. Then regressions and problems were fixed in the PPA, testing repeated, and only then being uploaded to the development release.


Ultimately, though, I think Ubuntu Cloud must take the prize for rigor in this area, with their OpenStack testing and verification process. On each and every commit to OpenStack upstream, OpenStack gets deployed to a Canonical cloud test bed (deployed with Juju, of course), then a full suite of tests is run. If the tests pass, it gets automatically built into a PPA. When the team is ready to do a release into Ubuntu, they can make the many necessary tweaks in the PPA before uploading it to Ubuntu. This level of precision allowed the Server team to stay on cutting-edge OpenStack, while maintaining a system that was always working, and therefore testable.

Don't Wait when Something Breaks


This value has really taken hold in the Ubuntu community, and it has really helped. There are 2 areas that I monitor each morning. First, I check how the archives look. I can do this because the Plus One Maintenance team, led in turns by Colin Watson and Martin Pitt, have written a tool that finds problems in the archives. Furthermore, each morning they strive to fix those problems. In this way, uninstallable packages and other problems are fixed before we try to spin that day's daily ISO.


After spinning the daily ISO, the QA team runs a set of smoke tests on it. If the tests can't run, or fail, the right engineering teams are notified, and they either fix the tests or fix the test failures so we can try spinning the CD again. The daily response meant that it was pretty certain that issues were introduced in the last 24 hours, which in turn made them easier and faster to resolve.

Still, Ubuntu development is incredibly rapid. We didn't want to set up a situation where people were afraid to make changes because they might break something. Therefore, from the beginning of the cycle, we accepted that our testing would not catch everything, and that some things would break. So, we set the goal of quickly reverting changes that caused the development release to be hard to test or use. We only had to resort to this a few times. For example, at one point, LightDM was not able to load any but the default desktop. As a result, it was not possible to use desktops like Kubuntu, Xubuntu, etc... The change was reverted the same day so that testing could continue.

Test for Testability, then Test Rigorously

So, we now have automated testing of Canonical upstream code, as well as daily images and daily upgrade testing. However, we don't consider this the end of the testing process, but the beginning. In other words, we use the automated tests to tell us if the code trunks and images are worth testing harder.


In 12.04 development, we evolved our community testing practices to meet these needs. In the past we would do a "call for testing", which meant "please update and try out Ubuntu, and let me know if anything broke". In 12.04 a "call for testing" changed to include test cases (so that we could know what worked, not just what broke), coverage of hardware and configurations (by recruiting community members who had the right setups), and organized results.

This thought process was not limited to only our Canonical-produced code, however. Before, or soon after, introducing potentially disruptive changes, Nicholas Skaggs, our new Community Team member, collects test cases from the relevant developers, and then organizes community members to execute those test cases. He is also organizing these tests at important milestones, such as Beta 1 and now Beta 2.

Read more
Rick Spencer

We are getting closer to release! Beta 2 freeze is tomorrow. Quality in 12.04 is looking very good today. However, we will still see hundreds of bugs get fixed across desktop and server between now and April 26th. In the past, I've found that in the flurry of activity it's easy to lose track of the most important bugs in all that noise, and then some scrambling ensues.

To counteract this, at least for myself, I had a couple of calls, with Jason Warner (Desktop Engineering Manager), Robbie Williamson (Server Engineering Manager), and Steve Langasek (Foundations Engineering Manager). We talked about what bugs we have (that we know about now) that would actually keep us from releasing as scheduled. We have a term called "release blocking bug", but in point of fact, almost none of them would actually keep us from releasing. The kinds of bugs that would truly make us slip an Ubuntu release are ones that cause problems with existing OSs in multi-boot situations, serious bugs in the installer, serious bugs in update manager, bugs that result in a loss of networking, etc... Bugs that can reasonably be fixed in an update do not block the release.

We decided that the best way to keep track of the very few bugs like this is to continue to track them as normal, but to set their importance as critical.

There is another set of bugs that I also ask the team to focus on. This set is more aspirational. I want us to fix all of the upgrade bugs that we find from automated testing, or at least all of the High and Critical importance ones. I would sincerely love to see every upgrade go smoothly for all of the millions of people who will be upgrading to Precise.

So, when am I going to start talking about pychart? Right now, in fact! Keeping tabs on bugs is boring, so it must be automated, and I love automating things with Python. So, I wrote a program that scrapes the data from those 2 pages, stores the info in a sqlite database, and generates a line graph each time I run it.

You can see all the code here if you want, but I doubt you do; it's pretty hacky. But it was fun to bring together the excellent json, HTMLParser, sqlite3, and pychart libraries.
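To give a sense of the storage half, here is a minimal sketch of the sqlite piece. The table and column names here are my own invention for illustration, not necessarily what the real script uses:

```python
import datetime
import sqlite3

# Hypothetical schema -- one row per day: the date (stored as a
# proleptic ordinal so it plots directly on the X axis), the count
# of open upgrade bugs, and the count of release blockers.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE IF NOT EXISTS bug_counts "
    "(day INTEGER PRIMARY KEY, upgrade INTEGER, blockers INTEGER)")

def record_counts(day, upgrade, blockers):
    """Store (or update) the counts scraped on a given day."""
    conn.execute(
        "INSERT OR REPLACE INTO bug_counts VALUES (?, ?, ?)",
        (day.toordinal(), upgrade, blockers))
    conn.commit()

record_counts(datetime.date(2012, 4, 2), 14, 3)
record_counts(datetime.date(2012, 4, 3), 11, 2)

# graph_list is a list of (x, y1, y2) tuples, which is the shape
# that pychart's line_plot selects from with its ycol parameter.
graph_list = conn.execute(
    "SELECT day, upgrade, blockers FROM bug_counts ORDER BY day").fetchall()
```

Running it daily just appends (or updates) one row, and the query hands back the whole history ready to plot.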

Here's the pychart money shot:
xaxis = axis.X(label=_("Date"), tic_interval=1, format=format_date)
yaxis = axis.Y(tic_interval=2, label="Open Bugs")
ar = area.T(x_axis=xaxis, y_axis=yaxis, y_range=(0, None))
plot = line_plot.T(label="upgrade", data=graph_list, ycol=1, tick_mark=tick_mark.star)
plot2 = line_plot.T(label="blockers", data=graph_list, ycol=2, tick_mark=tick_mark.square)
ar.add_plot(plot, plot2)

can = canvas.init("/home/rick/Documents/bugs.png", "png")
print ar.draw(can)
self.ui.image1.set_from_file("/home/rick/Documents/bugs.png")

def format_date(ordinal):
    d = datetime.date.fromordinal(int(ordinal))
    return "/a60{}" + d.strftime("%b %d, %y")

Read more
Rick Spencer

GObject Introspection Prompts

Dang, I hate how I often put "GIO" instead of "GOI".

Anyway, I'm starting a week of focusing on coding. Unfortunately I have a bunch of meetings that I cannot escape, but otherwise, I cancelled all non-essential meetings, and will be diving into the platform and working with the real application developer experience on Precise. Also, I have a few work items that I should really take care of.

Today, I started with a bite-sized morsel. I updated quickly.prompts to use GObject Introspection. The key value here is that you can now use quickly.prompts with a modern Quickly app.

The branch is waiting to be reviewed and merged here.

Read more
Rick Spencer

Girrrr: PyGame + Gtk in a GOI World

Back in August, I wrote a bit about how to embed PyGame into a pygtk app (and why it might be interesting to do that). Well, the world has moved on a bit, so today I updated the code sample to work with GObject Introspection.


It wasn't too hard to do, but did take a bit of digging around. I created a diff between the files and then commented on the diff, so you can see the required changes.


 === modified file 'game.py'
--- game.py 2011-08-25 12:14:00 +0000
+++ game.py 2012-02-08 10:22:50 +0000
@@ -1,41 +1,41 @@
import pygame
import os
#you can't import Gtk and GObject in the old way
#so delete these imports
-import gobject
-import gtk
#I haven't made quickly prompts work with introspection yet
#I think it will be easy, but in the meantime, we can't use
#quickly.widgets or quickly.prompts
-from quickly import prompts
#here's how to import GObject and Gtk
#you have to import GdkX11 or you can't get a widget's xid
+from gi.repository import GObject
+from gi.repository import Gtk
+from gi.repository import GdkX11
#"gtk" has to be changed to "Gtk" everywhere
#I used find and replace for this
-class GameWindow(gtk.Window):
+class GameWindow(Gtk.Window):
def __init__(self):
- gtk.Window.__init__(self)
- vbox = gtk.VBox(False, 2)
+ Gtk.Window.__init__(self)
+ vbox = Gtk.VBox(False, 2)
vbox.show()
self.add(vbox)
#create the menu
- file_menu = gtk.Menu()
+ file_menu = Gtk.Menu()
- accel_group = gtk.AccelGroup()
+ accel_group = Gtk.AccelGroup()
self.add_accel_group(accel_group)
- dialog_item = gtk.MenuItem()
+ dialog_item = Gtk.MenuItem()
dialog_item.set_label("Dialog")
dialog_item.show()
dialog_item.connect("activate",self.show_dialog)
file_menu.append(dialog_item)
dialog_item.show()
- quit_item = gtk.MenuItem()
+ quit_item = Gtk.MenuItem()
quit_item.set_label("Quit")
quit_item.show()
quit_item.connect("activate",self.quit)
file_menu.append(quit_item)
quit_item.show()
- menu_bar = gtk.MenuBar()
+ menu_bar = Gtk.MenuBar()
vbox.pack_start(menu_bar, False, False, 0)
menu_bar.show()
- file_item = gtk.MenuItem()
+ file_item = Gtk.MenuItem()
file_item.set_label("_File")
file_item.set_use_underline(True)
file_item.show()
@@ -44,10 +44,10 @@
menu_bar.append(file_item)
#create the drawing area
- da = gtk.DrawingArea()
+ da = Gtk.DrawingArea()
da.set_size_request(300,300)
da.show()
- vbox.pack_end(da)
#pygtk didn't require all of the arguments for packing
#but Gtk does, so you have to add all the arguments to pack_end here
+ vbox.pack_end(da, False, False, 0)
da.connect("realize",self._realized)
#set up the pygame objects
@@ -70,7 +70,15 @@
self.y += 5
def show_dialog(self, widget, data=None):
- prompts.info("A Pygtk Dialog", "See it works easy")
+ #prompts.info("A Pygtk Dialog", "See it works easy")
#I just hand crafted a dialog until I can get quickly.prompts ported
+ title = "PyGame embedded in Gtk Example"
#a lot of the constants work differently
#gtk.DIALOG_MODAL -> Gtk.DialogFlags.MODAL
#gtk.RESPONSE_OK -> Gtk.ResponseType.OK
#There's some info here to get started:
#http://live.gnome.org/PyGObject/IntrospectionPorting
#but I found that I had to poke around with ipython a bit to get it right
+ dialog = Gtk.Dialog(title, None, Gtk.DialogFlags.MODAL,(Gtk.STOCK_CANCEL, Gtk.ResponseType.CANCEL, Gtk.STOCK_OK, Gtk.ResponseType.OK))
+ content_area = dialog.get_content_area()
+ label = Gtk.Label("See, it still works")
+ label.show()
+ content_area.add(label)
+ response = dialog.run()
+ dialog.destroy()
def quit(self, widget, data=None):
self.destroy()
@@ -87,14 +95,14 @@
return True
def _realized(self, widget, data=None):
#since I imported GdkX11, I can get the xid
#but note that the properties are now function calls
- os.putenv('SDL_WINDOWID', str(widget.window.xid))
+ os.putenv('SDL_WINDOWID', str(widget.get_window().get_xid()))
pygame.init()
pygame.display.set_mode((300, 300), 0, 0)
self.screen = pygame.display.get_surface()
- gobject.timeout_add(200, self.draw)
+ GObject.timeout_add(200, self.draw)
if __name__ == "__main__":
window = GameWindow()
- window.connect("destroy",gtk.main_quit)
+ window.connect("destroy",Gtk.main_quit)
window.show()
- gtk.main()
+ Gtk.main()
I pushed the example to launchpad, in case you want to see the whole thing in context.

Read more
Rick Spencer

Bit of fun with JQuery and CSS

I stole some time to play a bit more with Veritas and JQuery today. Instead of the ugly list that I had before, I wanted some interactivity. So I got started by adding a little CSS to make a "card" for each bottle.

.bottle
{
    width: 300px;
    background-color: rgba(0, 0, 0, .5);
}
Then I wrote a bit of JavaScript to make each div that I pass into the HTML into a JQuery "draggable", and to do a bit of cheap layout.

else if(signal == "add_bottle")
{
    div = jQuery(data, {}).draggable();
    div.css('position', 'absolute');
    div.css('left', lft);
    div.css('top', tp);
    lft += 10;
    tp += 10;
    $( "#bottle_div" ).append(div);
}
Next I'll add some nicer layout. Then I'll start adding filters and dropdowns so I can sort and do other fun stuff.

Read more
Rick Spencer

In Vino JQuery, not a Socratic, dialogs

I spent a bit of today adding the capability to Veritas to collect user input in the form of a "dialog". I put "dialog" in quotes because I used a JQuery dialog within the HTML, rather than a Gtk dialog window.

Before settling on JQuery for this project, I looked at it and YUI in some depth. I was attracted to YUI because it seems very very complete. In fact, it has a filterable and sortable data grid, which is very important to me, as most applications, when you get down to it, are really just CRUD apps.

However, I went with JQuery for Veritas because the samples and tutorials made it seem very easy to get things done, and Veritas has simple needs.

JQuery has a cool page where you can create just the javascript that you need, as well as a theme generator. Note the "grapey" dialog bar in the screenshot; I set that color in the theme generator.

What the Dialog Does
First thing was to lay out the dialog in the normal HTML way. Note that I set it to display:none by default.

<div id="dialog" title="Enter New Bottle" style="display:none;width=00px">
  <fieldset>
    <p>
      <label for="country">Country</label>
      <input type="text" name="country" id="country" value="" placeholder="">
    </p>
    <p>
      <label for="region">Region</label>
      <input type="text" name="region" id="region" value="" placeholder="">
    </p>
    <p>
      <label for="domain">Domain</label>
      <input type="text" name="domain" id="domain" value="" placeholder="">
    </p>
    <p>
      <label for="grapes">Grape(s)</label>
      <input type="text" name="grapes" id="grapes" value="" placeholder="">
    </p>
    <p>
      <label for="price">Price</label>
      <input type="number" name="price" id="price" value="" placeholder="$">
    </p>
    <p>
      <label for="rating">Rating</label>
      <select name="rating" id="rating">
        <option value="1">1</option>
        <option value="2">2</option>
        <option value="3">3</option>
        <option value="4">4</option>
        <option value="5">5</option>
        <option value="6">6</option>
        <option value="7">7</option>
        <option value="8">8</option>
        <option value="9">9</option>
        <option value="10">10</option>
      </select>
    </p>
    <p>
      <label for="taste">Taste</label>
      <input type="text" name="taste" id="taste" value="" placeholder="">
    </p>
    <p>
      <label for="image">Label Picture</label>
      <input type="text" name="image" id="image" value="" placeholder="">
      <button id="preview_button">Preview</button>
      <img id="preview_image" src=""/>
    </p>
    <p>
      <button id="submit_new">OK</button>
    </p>
  </fieldset>
</div>

Then, I created a "New" button, and wired it up to some code that I was able to get from the excellent JQuery demo pages to display the dialog. Note that the documentation made it really easy to copy and paste my way to success here, including figuring out how to choose different reveal effects and such.

$( "#dialog" ).dialog({
    autoOpen: false,
    width: 600,
    show: "blind",
    hide: "blind"
});
$( "#new_button" ).click(function() {
    $( "#dialog" ).dialog( "open" );
    return false;
});
The dialog includes a submit button. I wired this up to create a JSON object and send a signal with all this data to the backend using the "send_message" javascript function.
$( "#submit_new" ).click(function() {
    bottle = {"country": $( "#country" ).val(),
              "region": $( "#region" ).val(),
              "domain": $( "#domain" ).val(),
              "grapes": $( "#grapes" ).val(),
              "price": $( "#price" ).val(),
              "rating": $( "#rating" ).val(),
              "taste": $( "#taste" ).val(),
              "image": $( "#image" ).val()};
    send_message("new_wine", bottle);
    $( "#dialog" ).dialog( "close" );
    return false;
});

Yesterday, send_signal took 2 strings: the name of the signal and some other data. Today I changed it to take the name of the signal and any JavaScript object. The function uses a popular JSON parser to stringify the JavaScript object before using the "set title hack" to pass the data to the back end.
function send_message(signal_name, data)
{
    title = document.getElementsByTagName("title")[0];
    message = {"signal": signal_name, "data": data};
    title.innerHTML = JSON.stringify(message);
}
Now that I have written that part, I don't have to worry about formatting my data, I can just pass it over.

Similarly, on the back end, I made the HTMLWindow class decode the json and pass it along:

def _on_html_message(self, view, frame, title):
    if title != "null":
        try:
            message = json.loads(title)
        except Exception, inst:
            print inst
            message = {"signal": "error", "data": "signal not parsed"}
    else:
        message = None
    if message is not None:
        self.on_html_message(message["signal"], message["data"])

def on_html_message(self, signal_name, data):
    pass

As a result, subclasses like VeritasWindow can just use the data without worrying about the implementation. It doesn't do anything with the data yet.
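To see the envelope format the back end expects without any Gtk or WebKit in the way, here is a standalone sketch of the decode step; the sample payload is made up:

```python
import json

# A sample of what the javascript side's send_message() would put in
# the <title> tag: a JSON envelope with "signal" and "data" keys.
# The payload values here are hypothetical.
title = json.dumps({
    "signal": "new_wine",
    "data": {"country": "France", "rating": "8"},
})

# Mirror what _on_html_message does: decode the envelope, and fall
# back to an error message if the string isn't valid JSON.
try:
    message = json.loads(title)
except ValueError:
    message = {"signal": "error", "data": "signal not parsed"}

signal_name, data = message["signal"], message["data"]
```

Anything JSON can represent survives the round trip, which is why the view code never has to think about formatting.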

2 Way Communication
I did add one bit of round tripping. It turns out that as a security precaution, the "file" input type does not let the javascript see the full path selected, it only allows the selected file to be uploaded to the server. I hope that I can figure out how to let the user grant Veritas permissions to pass the selected file to the javascript, but I can hack around it if it doesn't turn out to be easy or possible.

Meantime, I let the user type in a full path to the file, and then click "Preview". This takes the entered string, and sends it to the back end.

$( "#preview_button" ).click(function() {
    send_message("image_preview", $( "#image" ).val() );
    return false;
});


The back end then uses the awesome PIL library to make a thumbnail, and then passes the path of the thumbnail back. I actually suspect that I will be able to skip the step of saving the file and just use the string data, possibly with the Canvas element.


def on_html_message(self, signal_name, data):
    if signal_name == "image_preview":
        try:
            img = Image.open(data)
            img.thumbnail((128, 128), Image.ANTIALIAS)
            path = "/home/rick/.tmp/thumbnail.jpg"
            img.save(path, "JPEG")
            path = "file://" + path
            self.view.execute_script("receive_signal('set_preview','" + path + "');")
        except Exception, inst:
            print inst.message
            self.view.execute_script("receive_signal('set_preview_error','Could not find a valid image at %s');" % data)
Debugging HTML/javascript
Another handy thing I found today is that I can load the HTML page into Firefox and use a web console to poke at it. Very handy. Of course, this works now because I am not doing string replacement, but I think that I can actually make a similar thing work with a WebKit window.

Next
So now that I can collect the info from the user, I'll start saving the data in a sqlite database, and then work on presenting the data to the user.
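The sqlite part can be sketched now, using the field names from the input dialog; this is a guess at the eventual schema, not the final design:

```python
import sqlite3

# Hypothetical bottles table, with one column per dialog field.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE bottles (country TEXT, region TEXT, domain TEXT, "
    "grapes TEXT, price REAL, rating INTEGER, taste TEXT, image TEXT)")

def save_bottle(bottle):
    """Insert the dict that arrives over the "new_wine" signal.

    The :name placeholders line up with the keys of the JSON
    object built by the dialog's submit handler.
    """
    conn.execute(
        "INSERT INTO bottles VALUES "
        "(:country, :region, :domain, :grapes, :price, :rating, :taste, :image)",
        bottle)
    conn.commit()

save_bottle({"country": "France", "region": "Bordeaux", "domain": "",
             "grapes": "Merlot", "price": 12.5, "rating": 8,
             "taste": "plummy", "image": ""})
rows = conn.execute("SELECT country, rating FROM bottles").fetchall()
```

Because the dialog already produces a dict with exactly these keys, the signal payload can be handed straight to the INSERT.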

Read more
Rick Spencer

In Vino Veritas and HTML5 Client Apps


So, basically, not to put too fine a point on it, I've started to write apps for Ubuntu in a different way: essentially, replacing Gtk (or really PyGtk) with HTML5. This is my first post about how I am doing it. I've just started a project called "Veritas", which will be a wine tasting database for my wife and me. We'll be able to enter information about each bottle that we drink, and then look at trends over time, perhaps helping us pick nicer and nicer bottles as we go.

First, though, what happened, I thought you got along great with Gtk?
Well, I do still have a soft spot in my heart for PyGtk. Believe me, I've written plenty of code in it. I know the ins and outs pretty well, and I'm able to do things with it like write a responsive UI that doesn't block too much during long-running processes and such. PyGtk is great for building "boxy" apps, but I think a lot of people want to build slicker apps than Gtk is really designed for, or at least design them in different ways than Gtk supports well.

Why not Qt and QML?
This app, in fact, would be well suited to QML. However, I have other apps in mind, and I found QML/Qt to not be quite up to the job. For instance, I want to write a communication app that combines OpenLDAP and IRC functionality. Currently, there are no Qt libraries for LDAP or IRC, so to write such an app with QML, I'd have to write C++ Qt code to wrap whatever C libraries exist, and then write code to export models from that C++ code to expose it the right way in QML. That is a lot of overhead, especially considering that there are good Python libraries for LDAP, IRC, and pretty much anything else you could desire. So, I designed myself a system that lets me stick with Python for the back end code.

Also, QML lacks a widget toolkit at the moment, so there would be a lot of manual coding of things like buttons and such.

Why HTML5?
I chose HTML 5 for the widget toolkit for a few reasons.
  • I already know HTML/CSS/Javascript pretty well, and I know that cool things can be done with it. I bet a lot of you know it pretty well too.
  • WebKit is very well-supported open source, used and maintained by many large companies.
  • There are lots of cool widget toolkits to choose from, I'm currently looking at YUI since I think it's in pretty heavy use by some of the web teams at Canonical.
  • Because I wanted to try out HTML for client programming.
My Application Architecture
First, I laid everything out totally flat to start with. This is because I wanted to come to grips with making the view code talk to the model/controller code without mucking with any extra complexity. Of course, I will need to modify the layout as the actual code grows.

Currently, I am only focused on making a client application programming system, though I may, in the future, extend the system so the model/controller back end could be on a server and the view available via a browser. But this is firmly out of scope right now. I am, however, trying to be cognizant of keeping the system essentially portable by sequestering the Gtk-specific code into specific files that can be replaced if I want to run it without Gtk at some point.

Therefore, there are some important differences to note if you are used to web programming.
  • The back end and view code communicate with each other via signals. This is much different than web programming, where the view makes a request and waits for the server to respond with a string (for Ajax apps) or redirects to another view, passing some state along with it.
  • This means that the back end can send signals to the view. The view does not need to poll to see if the back end is ready, for example; the back end can just send a signal when it is.
  • This also means that long-running processes can block the GUI, since they are running in the same thread. I shall most likely put the Gtk main loop in its own thread so I can run long-running processes in separate threads, and then communicate between them.
  • The view cannot call a function on the back end, and wait for a response (for example with XmlHttpRequest). Rather it can only send the back end a signal.
  • The back end is not stateless. This is greatly simplifying. Most web programming frameworks have a lot of code to maintain state by storing it and accessing it on future requests by reading cookies stored on the client.
  • Currently, I have nothing like server side tags that are the bread and butter of most web programming frameworks. This typically works via string replacement, so I could either find a library to add this functionality, or make it easier to do string replacement with the HTML. This is typically desirable for a web app since you want to configure the HTML before it is sent from the server. Less important in a client app, but still, some string replacement of HTML may save some effort in writing complex javascript against the browser DOM.
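For that last bullet's simple string-replacement case, the Python standard library would probably do; a quick sketch using string.Template, with made-up placeholder names:

```python
from string import Template

# Hypothetical template fragment -- in the app this would be read
# from an .html file before being handed to the WebKit view.
page = Template("""<html><body>
<h1>$heading</h1>
<div id="bottle_div">$bottles</div>
</body></html>""")

# Fill in the placeholders on the Python side, so the javascript
# never has to build this markup against the DOM.
html = page.substitute(heading="In Vino Veritas",
                       bottles="<p>No bottles yet</p>")
```

That is nowhere near server-side tags, but for a client app that owns its own HTML it may be all the templating that is needed.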
Ok, let's get to the good stuff. The bin file is called "veritas". Running this file creates a VeritasWindow and then starts the Gtk main loop. The Gtk main loop is there because the WebKit window has to run inside something, and I chose a Gtk Window for this because of the simple integration with Ubuntu.

A VeritasWindow only does 2 things so far. It tells its base class "HTMLWindow" what HTML file to load, and it listens for signals from that view. Later, it will create new HTML5 windows and do other stuff in response to signals from the HTML view.

HTMLWindow is meant to be used only as a base class. First, it creates a top level menu so you can quit the app, and also, I think that apps should have menus (I haven't really thought through how menus will work in this system yet, but I'm hoping that DBUS Menu helps me out). Then it loads the HTML that the subclass told it to load. It also listens for signals from the view, parses the signals and has what is essentially a virtual function called "on_html_message" for subclasses to override. You should be able to receive messages from the view without looking at the internals of how it works. Among other things, this is platform specific.

main.html is the HTML5 code for the main window. All it does now is send a signal that it is loaded, and you can see that I added a heading. When paired with main.css the layout and look and feel of the UI will be controlled completely in the view code.

helpers.js is a file that I think I may need to handle platform specific signals sent to the view. Of course, you can always call "execute_script" and send whatever you want from the backend, but I think it's cleaner to expect well formatted signals from the back end instead.

Conclusion
So, that's basically all the boilerplate for making an HTML5 client app for Ubuntu. This represents a few hours of work on my part to make a re-usable and extensible system.
My next steps will be to do some database programming with sqlite, then I'll probably build a data input window for it. This certainly calls to mind Rails-like thinking (hmmm, I have the model, why can't I generate the view from that on the fly?), but, I don't think I want anything that complex. After I finish Veritas, I'll then extract the base classes and such, and perhaps create a Quickly template, then go ahead and work on my certain to be more complex communication application.

Read more
Rick Spencer

Smoke Tests

The QA team has started a page for getting up-to-the-minute automated results from smoke testing of daily images. Check it out! They still have more smoke tests to set up, but everything is running automatically from daily builds. You can check to see if the latest build installs and if the basic tests run. If they do, it's probably worth testing with that build. If not, then the team should be busy at work making it testable!

Also, this page is just one small step in the blueprint for the smoke tests results page.

Read more