Canonical Voices

Posts tagged with 'ubuntu one'

beuno

This week has been bitter-sweet. On the one hand, we announced that a project many of us had poured our hearts and minds into was going to be shut down. It’s made many of us sad, and some of us haven’t even figured out what to do with our files yet :)

On the other hand, we’ve been laser-focused on making Ubuntu on phones and tablets a success, and our attention has moved to making sure we have a rock-solid, scalable, secure platform that is pleasant to use for developers and users alike. We just didn’t have the time to keep racing against other companies whose only focus is file syncing, which was very frustrating as we watched a project we were proud of get left behind. It was hard to keep feeling proud of the service, so shutting it down felt like the right thing to do.

I am, however, very excited about open sourcing the server-side of the file syncing infrastructure. It’s a huge beast that contains many services and has scaled well into the millions of users.

We are proud of the code that is being released and in many ways we feel that the code itself was successful despite the business side of things not turning out the way we hoped for.

This will be a great opportunity for those of you who’ve been itching to have an open source service for personal cloud syncing at scale; the code comes battle-tested and with a wide array of features.

As usual, some people have taken this generous gesture “as an attempt to gain interest in a failing codebase”, which couldn’t be more wrong. The agenda here is to make Ubuntu for phones a runaway success, and in order to do that we need to double down on our efforts and focus on what matters right now.

Instead of storing away those tens of thousands of expensive man-hours of work in an internal repository somewhere, we’ve decided to share that work with the world, allowing others to build on top of it and benefit from it.

It’s hard sometimes to watch some people try to make a career out of painting everything that Canonical does as inherently evil, but at the end of the day what matters is making open source available to the masses. That’s what we’ve been doing for a long time, and that’s the only thing that will count in the end.

 

So in the coming months we’re going to be cleaning things up a bit, getting the code into the best shape possible and working out the details of how best to release it so that it is useful to others.

All of us who worked on this project for so many years are looking forward to sharing it and look forward to seeing many open source personal cloud syncing services blossoming from it.

Read more
Jane Silber

Today we are announcing plans to shut down the Ubuntu One file services.  This is a tough decision, particularly when our users rely so heavily on the functionality that Ubuntu One provides.  However, like any company, we want to focus our efforts on our most important strategic initiatives and ensure we are not spread too thin.

Our strategic priority for Ubuntu is making the best converged operating system for phones, tablets, desktops and more. In fact, our user experience, developer tools for apps and scopes, and commercial relationships have been constructed specifically to highlight third party content and services (as opposed to our own); this is one of our many differentiators from our competitors. Additionally, the free storage wars aren’t a sustainable place for us to be, particularly with other services now regularly offering 25GB-50GB free storage. If we offer a service, we want it to compete on a global scale, and for Ubuntu One to continue to do that would require more investment than we are willing to make. We choose instead to invest in making the absolute best open platform and to highlight the best of our partners’ services and content.

As of today, it will no longer be possible to purchase storage or music from the Ubuntu One store. The Ubuntu One file services will not be included in the upcoming Ubuntu 14.04 LTS release, and the Ubuntu One apps in older versions of Ubuntu and in the Ubuntu, Google, and Apple stores will be updated appropriately. The current services will be unavailable from 1 June 2014; user content will remain available for download until 31 July, at which time it will be deleted.

We will work to ensure that customers have an easy path to download all their content from Ubuntu One to migrate to other personal cloud services.  Additionally, we continue to believe in the Ubuntu One file services, the quality of the code, and the user experience, so will release the code as open source software to give others an opportunity to build on this code to create an open source file syncing platform.

Customers who have an active annual subscription will have their unused fees refunded. We will calculate the refund amount from today’s announcement, even though the service will remain available until 1 June and data available for a further two months.

We will contact customers separately with additional information about what to expect.  We will also publish further blog posts with advice on how to download content and with details on the open sourcing of the code.

The shutdown will not affect the Ubuntu One single sign on service, the Ubuntu One payment service, or the backend U1DB database service.

We’ve always been inspired by the support, feedback and enthusiasm of our users and want to thank you for the support you’ve shown for Ubuntu One. We hope that you’ll continue to support us as together we bring a revolutionary experience to new devices.

UPDATE:  See this post for updated information on downloading all your content from Ubuntu One.  We are aware that in some rare cases (large amount of content or very large number of files), the bulk download to a single archive is failing. Don’t worry – your content is not lost and we’ll post an updated bulk download tool which generates multiple archives rather than a single large one. We know of no issues with the other options discussed in that post.

 

Read more
login.ubuntu.com-id-hY4GFhr

Ubuntu Phone OS integrates with Orange and Deutsche Telekom in GSMA OneAPI initiative

Mobile World Congress kicks off today and we’re gearing up to show off Ubuntu running on multiple devices. We’ll be demonstrating phones, tablets and desktops at the stand, with Ubuntu developers flashing spare hardware, as well as showing integration and interoperability with Orange and Deutsche Telekom through the GSMA’s OneAPI initiative.

GSMA’s OneAPI initiative aims to provide application programming interfaces (APIs) that enable applications to exploit mobile network capabilities, such as messaging, authentication, payments and location-finding with a cross-operator reach. For example, a payment network API could be used to add an in-app purchase directly to the user’s mobile phone bill.

Ubuntu is the first smartphone operating system to be able to demonstrate integration and interoperability with a carrier’s authentication and billing systems. Working with Deutsche Telekom and Orange, we’ll show how a single API can be used to instantly log users in with their operator identity and seamlessly link that with Ubuntu One, Ubuntu’s identity and payments services, and provide carrier billing options upon purchase of music and eventually, apps.

This is a massive step forward for the industry, as the GSMA and partners such as Canonical are spearheading an initiative to standardise access to operator facilities via network APIs across all operators. The initiative will benefit operators, developers and consumers:

  • It puts operators in a position to forge stronger relationships with their customers.
  • For developers, OneAPI reduces the time and effort needed to create applications and content that are portable across mobile operators, increasing reach and ultimately enhancing the consumer experience.
  • For consumers, it makes it really quick and easy to make application purchases directly from their phone. It’s also more secure because it’s not necessary to input credit card details for each purchase.

Also at Mobile World Congress:

  • Mark Shuttleworth, founder of Ubuntu, will participate in a keynote panel discussion alongside Mozilla and Tizen on Tuesday 26th Feb at 18.00 at the MWC Conference Auditorium and broadcast live on Mobile World Live
  • We’ll be taking part in the App Developer Day on Tuesday 26th Feb. Stuart Langridge, technical architect at Canonical, will be presenting the Ubuntu phone, SDK, HTML5 and native apps, as well as discussing app development for Ubuntu on phones and tablets. We’ll also have engineers available at the event to flash spare handsets with the Touch Developer Preview of Ubuntu. This will take place from 9.00-9.30, 11.40-11.55 and 13.30-14.00 in Hall 8.0, Theatre A.
  • The GSMA Seminar on “Unlocking Value with Network APIs” will run on Thursday 28th from 9am to 10.30 am in Room CC1.1. Canonical’s Stuart Langridge will present and demo the Ubuntu Phone during the session. We’ll also be demonstrating Ubuntu’s OneAPI solution at the GSMA stand daily.
  • Look out for Ubuntu engineers who will flash spare hardware with developer images for phone and tablet throughout the show close to the Ubuntu stand.

Read more
James Henstridge

One of the projects I’ve been working on has been to improve aspects of the Ubuntu One Developer Documentation web site.  While there are still some layout problems we are working on, it is now in a state where it is a lot easier for us to update.

I have been working on updating our authentication/authorisation documentation and revising some of the file storage documentation (the API used by the mobile Ubuntu One clients).  To help verify that the documentation was useful, I wrote a small program to exercise those APIs.  The result is u1ftp: a program that exposes a user’s files via an FTP daemon running on localhost.  In conjunction with the OS file manager or a dedicated FTP client, this can be used to conveniently access your files on a system without the full Ubuntu One client installed.

You can download the program from:

https://launchpad.net/u1ftp/trunk/0.1/+download/u1ftp-0.1.zip

To make it easy to run on as many systems as possible, I packaged it up as a runnable zip file, so it can be run directly by the Python interpreter.  As well as a Python interpreter, you will need the following installed to run it:

  • On Linux systems, either the gnomekeyring extension (if you are using a GNOME derived desktop), or PyKDE4 (if you have a KDE derived desktop).
  • On Windows, you will need pywin32.
  • On MacOS X, you shouldn’t need any additional modules.

These could not be included in the zip file because they are extension modules rather than pure Python.

Once you’ve downloaded the program, you can run it with the following command:

python u1ftp-0.1.zip

This will start the FTP server listening at ftp://localhost:2121/.  Pointing a file manager at that URL should prompt you to log in, where you can use your standard Ubuntu One credentials and start browsing your files.  It will verify the credentials against the Ubuntu SSO service and issue an OAuth token that it stores in the keyring.  The OAuth token is then used to authenticate requests to the file storage REST API.
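
If you prefer a quick script to a file manager, the same server can also be driven with Python’s standard ftplib module. This is just an illustrative sketch (not part of u1ftp itself), and the credentials below are placeholders for your own Ubuntu One account details:

from ftplib import FTP

# Connect to the locally running u1ftp server
ftp = FTP()
ftp.connect('localhost', 2121)
# Log in with your Ubuntu One credentials (placeholders here)
ftp.login('user@example.com', 'your-password')
# List the files in the root of your Ubuntu One storage
ftp.retrlines('LIST')
ftp.quit()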

While I expect this program to be useful on its own, it was also intended to act as an example of how the Ubuntu One API can be used.  One way to browse the source is to simply unzip the package and poke around.  Alternatively, you can check out the source directly from Launchpad:

bzr branch lp:u1ftp

If you come up with an interesting extension to u1ftp, feel free to upload your changes as a branch on Launchpad.

Read more
mandel

Recently a very interesting bug was reported against Ubuntu One on Windows. Apparently we try to sync a number of system folders that are present on Windows 7 for backward compatibility.

The problem

The actual problem in the code is that we are using os.listdir. While listdir in Python does return system folders (at the end of the day, they are there), os.walk does not. For example, let’s imagine that we have the following scenario:

Documents
    My Pictures (System folder)
    My Videos (System folder)
    Random dir
    Random Text.txt

If we run os.listdir we would have the following:

>>> import os
>>> os.listdir('Documents')
['My Pictures', 'My Videos', 'Random dir', 'Random Text.txt']

While if we use os.walk we have:

>>> import os
>>> path, dirs, files = next(os.walk('Documents'))
>>> print dirs
['Random dir']
>>> print files
['Random Text.txt']

The fix is very simple: filter the result from os.listdir using the following function:

import win32file
 
INVALID_FILE_ATTRIBUTES = -1
 
 
def is_system_path(path):
    """Return if the function is a system path."""
    attrs = win32file.GetFileAttributesW(path)
    if attrs == INVALID_FILE_ATTRIBUTES:
        return False
    return win32file.FILE_ATTRIBUTE_SYSTEM & attrs ==\
        win32file.FILE_ATTRIBUTE_SYSTEM
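
Putting the two pieces together, a minimal sketch of a filtered directory listing could look like the following (the helper name list_non_system is mine, not part of the Ubuntu One code):

import os


def list_non_system(path):
    """Return the entries of path, skipping Windows system folders."""
    result = []
    for name in os.listdir(path):
        if not is_system_path(os.path.join(path, name)):
            result.append(name)
    return result

# With the Documents example from above, 'My Pictures' and 'My Videos'
# would be filtered out, leaving ['Random dir', 'Random Text.txt'].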

File system events

An interesting question to ask after the above is: how does ReadDirectoryChangesW work with system directories? Well, thankfully it works correctly. What does that mean? Well, it means the following:

  • Changes in system folders do not generate notifications.
  • A move from a watched directory to a system folder is not reported as a MOVE_FROM/MOVE_TO pair but as a FILE_DELETED event.

The above means that if you have a system folder in a watched path you do not need to worry, since the events will work correctly, which is very good news.

Read more
mandel

On Ubuntu One we use BitRock to create the packages for Windows. One of the new features I’m working on is to check every so often whether there are updates, so that the user gets notified. This code is not needed on Ubuntu because the Update Manager takes care of that, but when you work on an inferior OS…

Generate the auto-update.exe

In order to check for updates we use the auto-update.exe wizard generated by BitRock. Generating the wizard is very straightforward: first, as with most of the BitRock stuff, we write the XML that configures the generated .exe.

<autoUpdateProject>
    <fullName>Ubuntu One</fullName>
    <shortName>ubuntuone</shortName>
    <vendor>Canonical</vendor>
    <version>201</version>
    <singleInstanceCheck>1</singleInstanceCheck>
    <requireInstallationByRootUser>0</requireInstallationByRootUser>
    <requestedExecutionLevel>asInvoker</requestedExecutionLevel>
</autoUpdateProject>

There is just a single thing worth mentioning about the above XML. The requireInstallationByRootUser option is set to 0 (false) because we use the generated .exe to check whether updates are present, and we do not want the user to have to be an administrator for that; it does not make sense. Once you have the above or similar XML you can execute:

"{$bitrock_installation$}\autoupdate\bin\customize.exe" build ubuntuone_autoupdate.xml windows

Which generates the .exe (the output is in ~\Documents\AutoUpdate\output).

Putting it together with Twisted

The following code provides an example of a couple of functions that can be used by the application, first to check for an update, and to perform the actual update.

import os
import sys
 
# Avoid pylint error on Linux
# pylint: disable=F0401
import win32api
# pylint: enable=F0401
 
from twisted.internet import defer
from twisted.internet.utils import getProcessValue
 
AUTOUPDATE_EXE_NAME = 'autoupdate-windows.exe'
 
def _get_update_path():
    """Return the path in which the autoupdate command is found."""
    if hasattr(sys, "frozen"):
        exec_path = os.path.abspath(sys.executable)
    else:
        exec_path = os.path.dirname(__file__)
    folder = os.path.dirname(exec_path)
    update_path = os.path.join(folder, AUTOUPDATE_EXE_NAME)
    if os.path.exists(update_path):
        return update_path
    return None
 
 
@defer.inlineCallbacks
def are_updates_present(logger):
    """Return if there are updates for Ubuntu One."""
    update_path = _get_update_path()
    logger.debug('Update path %s', update_path)
    if update_path is not None:
        # If there is an update present we will get 0, and a different
        # number otherwise
        retcode = yield getProcessValue(update_path, args=('--mode',
            'unattended'), path=os.path.dirname(update_path))
        logger.debug('Return code %s', retcode)
        if retcode == 0:
            logger.debug('Returning True')
            defer.returnValue(True)
    logger.debug('Returning False')
    defer.returnValue(False)
 
 
def perform_update():
    """Spawn the autoupdate process and call the stop function."""
    update_path = _get_update_path()
    if update_path is not None:
        # lets call the updater with the commands that are required,
        win32api.ShellExecute(None, 'runas',
            update_path,
            '--unattendedmodeui none', '', 0)

With the above you should be able to easily update the installation of your frozen python app on Windows when using BitRock.
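
As a rough illustration of how these helpers might be wired up in an application (the logger, the interval and the scheduling below are my own assumptions, not the actual Ubuntu One code; are_updates_present and perform_update are the functions defined above):

import logging

from twisted.internet import defer, reactor

logger = logging.getLogger('autoupdate')


@defer.inlineCallbacks
def check_and_maybe_update():
    """Check for updates and run the updater if one is present."""
    updates_present = yield are_updates_present(logger)
    if updates_present:
        perform_update()
    else:
        # Check again in an hour (arbitrary interval for this sketch)
        reactor.callLater(3600, check_and_maybe_update)

reactor.callWhenRunning(check_and_maybe_update)
reactor.run()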

Read more
mandel

Following my last post on how to list all installed applications using Python, here is the code required to remove an installed MSI from a Windows machine using Python.

# Imports and constants needed to make this snippet self-contained
from ctypes import windll

ERROR_SUCCESS = 0


class MsiException(Exception):
    """Raised when there are issues with the msi actions."""
 
 
def uninstall_product(uid):
    """Will remove the old beta from the users machine."""
    # we use the msi lib to be able to uninstall apps
    property_name = u'LocalPackage'
    uninstall_path = get_property_for_product(uid, property_name)
    if uninstall_path is not None:
        # lets remove the package.
        command_line = u'REMOVE=ALL'
        result = windll.msi.MsiInstallProductW(uninstall_path, command_line)
        if result != ERROR_SUCCESS:
            raise MsiException('Could not remove product %s' % uid)

The missing functions can be found in the last post about the topic.
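
For readers who do not have that post handy, here is a rough sketch of what get_property_for_product does; it wraps MsiGetProductInfoW via ctypes and reuses the ERROR_SUCCESS constant from above (the buffer size is an arbitrary choice of mine):

from ctypes import byref, create_unicode_buffer, windll
from ctypes.wintypes import DWORD


def get_property_for_product(uid, property_name, buffer_size=512):
    """Return the given MSI property for the product, or None."""
    value_buffer = create_unicode_buffer(buffer_size)
    size = DWORD(buffer_size)
    result = windll.msi.MsiGetProductInfoW(uid, property_name,
                                           value_buffer, byref(size))
    if result != ERROR_SUCCESS:
        return None
    return value_buffer.value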

Read more
David

Another edition of the Ubuntu App Developer Week and another amazing knowledge sharing fest around everything related to application development in Ubuntu. Brought to you by a range of the best experts in the field, here’s just a sample of the topics they talked about: App Developer Strategy, Bazaar, Bazaar Explorer, Launchpad, Python, Internationalization, Launchpad Translations, Unity, Unity 2D, Gedit Developer Plugins, the MyApps Portal, the App Review Board, the Ubuntu Software Centre, Unity Mail, Launchpad Daily Builds, Ubuntu One APIs, Rapid App Development, Quickly, GooCanvas, PyGame, Unity Launcher, Vala, the App Developer Site, Indicators, Python Desktop Integration, Libgrip, Multitouch, Unity Lenses, Ubuntu One Files Integration, The Business Side of Apps, Go, Qt Quick… and more. Oh my!

And a pick of what they had to say:

We believe that to get Ubuntu from 20 million to 200 million users, we need more and better apps on Ubuntu
Jonathan Lange on making Ubuntu a target for app developers

Bazaar is the world’s finest revision control system
Jonathan Riddell on Bazaar

So you’ve got your stuff, wherever you are, whichever device you’re on
Stuart Langridge on Ubuntu One

Oneiric’s EOG and Evince will be gesture-enabled out of the box
Jussi Pakkanen on multitouch in Ubuntu 11.10

I control the upper right corner of your screen ;-)
Ted Gould on Indicators

If you happened to miss any of the sessions, you’ll find the logs for all of them on the Ubuntu App Developer Week page, and the summaries for each day on the links below:

Ubuntu App Developer Week – Day 5 Summary

The last day came with a surprise: an extra session for all of those who wanted to know more about Qt Quick and QML. Here are the summaries:

Getting A Grip on Your Apps: Multitouch on GTK apps using Libgrip

By Jussi Pakkanen

In his session, Jussi talked about one of the most interesting technologies where Ubuntu is leading the way in the open source world: multitouch. Walking the audience through the Grip Tutorial, he described how to add gesture support to existing applications based on GTK+ 3. He chose to focus on the higher layer of the uTouch stack, explaining the concepts that libgrip, the gesture library, is built upon, such as device types and subscriptions. After exploring the code examples in detail, he then revealed that in Oneiric, Eye of GNOME and Evince, Ubuntu’s default image viewer and default PDF reader, will be gesture-enabled.

Check out the session log.

Creating a Google Docs Lens

By Neil Patel

Neil introduced his session explaining the background behind Lenses: a re-architecture of the now superseded Places concept to make them more powerful, provide more features and make it easier to add features through a re-engineered API. A Lens creates its own instance, adds categories and filters, and leaves the searching to Scopes. The Lens/Scope pairs are purely requests for data, independent of the type of UI, and being provided by the libunity library, they can be written in any of the programming languages supported by GObject Introspection (Python, JavaScript, C/C++, Vala, etc.). To illustrate all of these concepts, Neil devoted the rest of the session to a real example of creating a Lens for Google Docs.

Check out the session log.

Practical Ubuntu One Files Integration

By Michael Terry

Another hands-on session from Michael, with a real-world example of how to supercharge apps with cloud support. Using his experience integrating the Ubuntu One Files API into Deja Dup, the default backup application in Ubuntu, he went in detail through the code of a simple program that talks to a user’s personal Ubuntu One file storage area. We liked Michael’s session so much that it will very soon be featured as a tutorial on developer.ubuntu.com!

Check out the session log and Michael’s awesome notes.

Publishing Your Apps in the Software Center: The Business Side

By John Pugh

Ubuntu directly benefits from Canonical becoming a sustainable business to support its development, and that’s exactly what John talked about. Being responsible for business development in the Ubuntu Software Centre, he’s got a privileged insight into how to make it happen. He started off explaining that the main goal is to present Ubuntu users with a large catalog of apps available for purchase, and then concentrated on how to submit paid applications for publication in the Software Centre. It is a simple 5-step process, and the behind-the-scenes work can be summarized as: Canonical helps package the app, hosts it and provides the payment via pay.ubuntu.com, in an 80%/20% split. Other highlights included the facts that right now only DRM-free apps that don’t require license keys can be submitted, although there is ongoing work to implement license key support, and that MyApps, the online app submission portal, can take nearly any content: apps with adverts, “free” online game clients and HTML5 apps.

Check out the session log.

Writing an App with Go

By Gustavo Niemeyer

Gustavo’s enthusiasm for Go, the new programming language created by Google, shows every time you start a conversation with him on that topic. And it showed as well in this session, in which he created yet another "Hello world" application in a new language (you guessed it: Go). Along the way, he had time to describe the features of this new addition to the extensive family of programming languages: statically compiled with good reflection capabilities, structural typing, interfaces and more.

Check out the session log.

Qt Quick At A Pace

By Donald Carr

Closing the week with the last (and surprise) session, we had the luxury of having Donald, from the Nokia Qt team, the makers of Qt itself, talk about Qt Quick. Using a clear and concise definition, Qt Quick is an umbrella term used to refer to QML and its associated tooling, QML being a declarative markup language with tight bindings to JavaScript. A technology equally suited to mobile or to the desktop, QML enables developers to rapidly create animation-rich, pixmap-oriented UIs. Through the qtmediahub and Qt tutorial examples, he explored QML’s capabilities and offered good practices for successfully developing QML-based projects.

Check out the session log.

Wrapping Up

Finally, if you’ve got any feedback on UADW, on how to make it better, things you enjoyed or things you believe should be improved, your comments will be much appreciated and useful in tailoring this event to your needs.

Thanks a lot for participating. I hope you enjoyed it as much as I did, and see you again in 6 months’ time for another week full of app development goodness!


Read more
David

Ubuntu App Developer Week – Day 2 Summary

Time flies and we’re already halfway through UADW, but there is still much to come! Here’s yesterday’s report for your reading pleasure:

Unity Mail: Webmail Notification on Your Desktop

By Dmitry Shachnev

Starting off with a description of the features of Unity Mail, such as displaying the webmail unread message count, notifications and mail subjects, we then learned more about how it was developed and the technologies used to create it. It’s written in Python, using GObject Introspection (PyGI), and integrates with Ubuntu through the Unity, Notify and Indicate modules. After describing each one in more detail, Dmitry talked about how the app can be translated using Launchpad, and how he uses the Bazaar source revision control system to work with code history. Wrapping up, he went through the plans for the future: more configuration options, marking all messages as read and the need for a new icon. Any takers? ;)

Check out the session log here.

Launchpad Daily Builds and Rapid Feedback: Writing Recipe Builds

By Jelmer Vernooij

Assuming some previous knowledge on Debian packaging, in his session Jelmer walked the audience through a practical example of a basic recipe build for a small project: pydoctor. Drawing the cooking recipe analogy, package recipes are a description of the ingredients (source code branches) and how to put them together, ending up with a delicious Debian package for users to enjoy. Launchpad can build packages from recipes once or automatically on a daily basis provided the code has changed, conveniently placing the result in a PPA. In the last part of the session, he described in detail the contents of an existing recipe and added some notes on best practices when building from a recipe.

Check out the session log here.

Using the Ubuntu One APIs for Your Apps: An Overview

By Stuart Langridge

The idea behind the Ubuntu One developer programme is to make it easy to add the cloud to your apps and make new apps for the cloud. With this opening line, Stuart delivered a high-level overview of the cool things you can do as an app developer adding Ubuntu One support. One aspect is data: for example, building applications that work on the desktop, on mobile phones and on the web, securely sharing data among users. Another is music: streaming music and sharing playlists on the desktop, on mobile and from the web, all through a simple REST HTTP API. He also mentioned some examples of cloud-enabled applications, Shutter and Deja-Dup, and many other interesting ways to use Ubuntu One to do exciting things with data. And you can get started already using the available documentation.

Check out the session log here.

Supercharging Your Apps with Unity Launcher Integration

By Jason Smith

In his talk, Jason first went through the terminology that covers the elements related to the Unity Launcher, and the background behind the Launcher API, implemented in the libunity library. Libunity can be used in many programming languages: Python, C, Vala and others supported by GObject Introspection. Going through what you can do with the Launcher (marking/unmarking apps as urgent, setting object counts, setting progress on objects and adding quicklist menu items to the object), he used Vala snippets to illustrate each feature with code.

Check out the session log here.

Hello Vala: An Introduction to the Vala Language

By Luca Bruno

Vala, a new programming language with C#-like syntax that compiles to C and targets the GObject type system: with this clear statement of what Vala is and what it can do, Luca, a contributor to the project, introduced the key features of the language one by one through his “Hello world” example: namespaces, types, classes, properties, keywords and more. As a highlight he mentioned Vala’s automatic memory management using reference counting, and its interoperability with other languages, most notably C, though it can also work with many others supported by GObject Introspection. Other cool features to note were error handling on top of GError and support for async operations, closures and DBus client/server, each of which he elaborated on before finishing the session.

Check out the session log here.

The Day Ahead: Upcoming Sessions for Day 3

Another day, another awesome set of sessions coming up:

16:00 UTC – Creating an App Developer Website: developer.ubuntu.com

Ubuntu 11.10 will not only bring new features to the OS itself. In time for the release we’ll be launching the new Ubuntu App Developer site, a place for developers to find all the information and resources they need to get started creating, submitting and publishing their apps in Ubuntu. John Oxton, David Planella and many other people have worked to make the next developer.ubuntu.com possible and will tell you all about it.

17:00 UTC – Rapid App Development with Quickly

Quickly is a wrapper that pulls together all the recommended tools and technologies to take apps from creation through their whole life cycle in Ubuntu. With an easy set of commands that hide all the complexity for you, it effectively enables developers to follow rapid development principles and worry only about writing code. Michael Terry, from the Quickly development team, will be looking forward to guiding you through the first steps with this awesome tool.

18:00 UTC – Developing with Freeform Design Surfaces: GooCanvas and PyGame

Have you ever wondered what freeform design surfaces, or canvases are? You probably have now. Well, lucky you then, because Rick Spencer will be here to tell you what they’re good for and how to get started with them ;)

19:00 UTC – Making your app appear in the Indicators

In another session on how to integrate with the platform, Ted Gould, the man who knows most about them, will describe how to add indicator features to your apps, both in terms of panel indicators and messaging menu support.

20:00 UTC – Will it Blend? Python Libraries for Desktop Integration

You certainly will want your app to have that familiar look and feel at home in the OS it’s running on, but you’ll also want it to use all the backend technologies to integrate even deeper and provide a great user experience. Well, fear not, for Marcelo Hashimoto is here to tell you exactly how to do that!

Looking forward to seeing you all there in a few hours!


Read more
mandel

In a few days (well, if I find some kind person to take care of Iron) I will be attending the Ubuntu One Developer Evening, at which we should be able to hear Stuart talk about the bunch of crazy ideas he has for developers to use our infrastructure to do cool stuff. I’ll be there for a few reasons:

  • Hear what Stuart has been planning. I’ve got to admit I should know a lot more about the Ubuntu One developer program, but unfortunately I have been working on the Windows port too much and have ignored the rest of the teams… mea culpa :( .
  • Learn a few more things about the APIs so that I can finish my little Chrome extension that uses Ubuntu One (no, it is not bookmark sync, I could not be less interested in that!).
  • See a bunch of developers and learn about their ideas and what they are doing.
  • Drinks, drinks, drinks! I’m a University of Manchester alumnus and I bloody miss Manchester. I don’t care what people say, it is a great city.

If you are going to be in Manchester you should join us; the event is FREE and, trust me, Stuart is a great guy to go out for drinks with (I’m not bad either, but I always get in trouble :P ).

Read more
beuno

After a long and interesting journey, today we've released Ubuntu One Files for Android.

The app started being developed by Michał Karnicki as a Google Summer of Code project, and he did such a fantastic job at it that we hired him on full time and teamed him up with Chad Miller to end up releasing a fantastically polished app. It got immediately featured in the press!
It was built on top of our public APIs, documented here: https://one.ubuntu.com/developer/

Besides letting you access all your files stored in Ubuntu One, it has a very cool feature to auto-sync all the pictures on your phone, giving you an instant backup of them and a convenient place to share them!

I'm super proud of the work we put out.

Also, as with all the rest of our clients, it's open source and you can get it on Launchpad.

Read more
mandel

At Ubuntu One we needed to be able to use named pipes on Windows for IPC. This is a very common pattern in multi-process applications like the one we are going to provide, but in our case we had a twist: we are using Twisted. As some of you may know, there is no default reactor that would allow you to write a protocol in Twisted and use named pipes as the transport of the protocol. Well, this was true until very recently.

Txnamedpipes (lp:txnamedpipes) is a project that provides an IOCP-based reactor that allows you to use named pipes as the transport of your protocol. At the moment we are confident that the implementation allows you to use spread.pb or a custom protocol on Twisted 10 and later on Windows 7 (we have been able to find a number of issues on Windows XP). The following is a small example of a spread.pb service and client that use a named pipe for communication.

from txnamedpipes.reactor import install
install()
from twisted.spread import pb
from twisted.internet import reactor
 
class Echoer(pb.Root):
    def remote_echo(self, st):
        print 'echoing:', st
        return st
 
if __name__ == '__main__':
    reactor.listenPipe('\\\\.\\pipe\\test_pipe',
                               pb.PBServerFactory(Echoer()))
    reactor.run()
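
And the corresponding client:
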
from txnamedpipes.reactor import install
install()
from twisted.spread import pb
from twisted.internet import reactor
from twisted.python import util
 
factory = pb.PBClientFactory()
reactor.connectPipe('\\\\.\\pipe\\test_pipe', factory)
d = factory.getRootObject()
d.addCallback(lambda object: object.callRemote("echo", 
                      "hello network"))
d.addCallback(lambda echo: 'server echoed: '+echo)
d.addErrback(lambda reason: 'error: '+str(reason.value))
d.addCallback(util.println)
d.addCallback(lambda _: reactor.stop())
reactor.run()

The code has the MIT license and we hope that other people find it useful.

Read more
beuno

A very healthy and civilised session about switching to Thunderbird by default just ended here at the Ubuntu Developer Summit, and the outcome was that if the Thunderbird developers manage to do some needed work (to be defined) by a certain time in our cycle (to be defined), we will ship Oneiric, and more importantly the 12.04 LTS, with Thunderbird by default.

The bits I can remember that need to be done are:
- Evolution data server integration
- Tighter integration with Unity
- Shrink the size of the overall application so it fits on the CD
- A good upgrade story
- Migration plan for Evolution users

We will also make sure it ships with integration with contacts in Ubuntu One, thanks to James Tait's head start with the Hedera project.

I'm a big fan of Thunderbird, so I'll be doing my best to help them achieve their goals  :)

Read more
mandel

At the moment some of the tests (and I cannot point out which ones) of ubuntuone-client fail when they are run on Windows. The reason for this is the way in which we get the notifications out of the file system and the way the tests are written. Before I blame the OS or the tests, let me explain a number of facts about the Windows filesystem and the possible ways to interact with it.

To be able to get file system changes from the OS the Win32 API provides the following:

SHChangeNotifyRegister

This function was broken up until Vista, when it was fixed. Unfortunately, AFAIK we also support Windows XP, which means that we cannot trust this function. On top of that, taking this path means that we can have a performance issue. Because the function is built on top of Windows messages, if too many changes occur the sync daemon would start receiving roll-up messages that just state that something changed, and it would be up to the sync daemon to work out what really happened. Therefore we can all agree that this is a no-no, right?

FindFirstChangeNotification

This is a really easy function to use, based on ReadDirectoryChangesW (I think it is a simple wrapper around it), that lets you know that something changed but gives no information about what changed. Because it is based on ReadDirectoryChangesW it suffers from the same issues.

ReadDirectoryChangesW

This is by far the most common way to get change notifications from the system. Now, in theory there are two possible cases which can go wrong that would affect the events raised by this function:

  1. There are too many events, the buffer gets overloaded and we start losing events. A simple way to solve this issue is to process the events in a different thread as soon as possible so that we can keep reading the changes.
  2. We use the sync version of the function which means that we could have the following issues:
    • Blue screen of death because we used too much memory from the kernel space.
    • We cannot close the handles used to watch the changes in the directories. This causes the threads to end up blocked.

As I mentioned, this is the theory, and therefore it makes perfect sense to choose this option as the way to get notified of the changes until… you hit a great little feature of Windows called write-behind caching. The idea of write-behind caching is the following:

When you attempt to write a new file on your HD, Windows does not directly modify the HD. Instead it makes a note of the fact that your intention is to write on disk and saves your changes in memory. Isn’t that smart?

Well, that lovely feature comes enabled by default AFAIK from XP onwards. Any smart person would wonder how that interacts with FindFirstChangeNotification/ReadDirectoryChangesW. Well, after some work here is what I have managed to find out:

The IO Manager (internal to the kernel) queues up disk-write requests in an internal buffer, and the actual changes are not physically committed until some condition is met, which I believe is related to the “write-behind caching” feature. The problem appears to be that the user-space callback via FileSystemWatcher/ReadDirectoryChanges does not occur when disk-write requests are inserted into the queue, but rather when they are leaving the queue and being physically committed to disk. From what I have been able to gather through observation, the lifetime of a queue depends on:

  1. Whether more writes are being inserted in the queue.
  2. Whether another app requests a read from an item in the queue.

This means that when using FileSystemWatcher/ReadDirectoryChanges the events are fired only when the changes are actually committed, and for a user-space program this follows a non-deterministic process (insert Spanish swearing here). A way to work around this issue is to use the FlushFileBuffers function on the volume, which does need admin rights, yeah!

Change Journal records

Well, this allows tracking the changes that have been committed on an NTFS system (which means we do not have support for FAT). This technique allows you to keep track of the changes using an update sequence number in an interesting manner. At first look, although parsing the data is hard, this solution seems to be very similar to the one used by pyinotify, and therefore someone will say: hey, let’s just tell Twisted to do a select on that file and read the changes. Well, no, it is not that easy; files do not provide the functionality used for select, just sockets (http://msdn.microsoft.com/en-us/library/aa363803%28VS.85%29.aspx) /me jumps for joy

File system filters

Well, this is an easy one to summarize: you have to write a driver-like piece of code. That means C, COM and being able to crash the entire system with a nice blue screen (although I can change the color to aubergine before we crash).

Conclusion

At this point I hope I have convinced a few of you that ReadDirectoryChangesW is the best option to take, but you might be wondering why I mentioned the write-behind caching feature. Well, here comes my complaint about the tests. We do use the real file system notifications for testing, and the trial test cases do have a timeout! Those two facts plus the lovely write-behind caching feature mean that the tests on Windows fail just because the bloody events are not raised until they leave the IO manager’s queue.
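
To make all of the above a bit more concrete, here is a minimal, simplified sketch of the kind of synchronous ReadDirectoryChangesW loop discussed in this post, using pywin32; it is purely illustrative and not the actual Ubuntu One implementation:

import win32con
import win32file

FILE_LIST_DIRECTORY = 0x0001  # access right, not exposed by win32con


def watch(path):
    """Print change notifications for path until interrupted."""
    handle = win32file.CreateFile(
        path,
        FILE_LIST_DIRECTORY,
        win32con.FILE_SHARE_READ | win32con.FILE_SHARE_WRITE |
        win32con.FILE_SHARE_DELETE,
        None,
        win32con.OPEN_EXISTING,
        win32con.FILE_FLAG_BACKUP_SEMANTICS,
        None)
    while True:
        # This call blocks, and only returns once the IO manager commits
        # the changes to disk: the write-behind caching issue above.
        results = win32file.ReadDirectoryChangesW(
            handle,
            1024,  # buffer size; too small and we start losing events
            True,  # watch the whole subtree
            win32con.FILE_NOTIFY_CHANGE_FILE_NAME |
            win32con.FILE_NOTIFY_CHANGE_DIR_NAME |
            win32con.FILE_NOTIFY_CHANGE_LAST_WRITE,
            None,
            None)
        for action, filename in results:
            print action, filename


watch(u'C:\\path\\to\\watch')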

Read more
beuno

In the last few months, I've been lucky enough to be able to hire some exceptional people that were contributing to Ubuntu One in their free time. Every time someone comes in from the community, filled with excitement about being able to work on their pet project full time, my job gets that much better.
So, everyone say hello to James Tait and Michał Karnicki!

Now we're looking for a new team member to help us make the Ubuntu One website awesome. Someone who knows CSS and HTML inside out, cares deeply about doing things the best way possible and is passionate about their work.

If you're interested or know anyone who may, the job posting is up on Canonical's website.

Read more
David

Ubuntu App Developer Week – Day 3 Summary

Right into the middle of the week and still delivering the most diverse set of sessions on the most interesting technologies. QML, Cloud, D-Bus, Multitouch, Unity, Bazaar… Wednesday had a bit of everything. Most importantly, these sessions are for you all, so I was really glad to hear feedback on how people liked the content of App Developer Week! So here’s a new summary for all of those who couldn’t attend.

Qt Quick: QML the Language

By Jürgen Bocklage-Ryannel

In his first session, Jürgen gave a short intro to Qt Quick’s QML language and how to use it. The first steps were to install Qt and Qt Creator, followed by a description of what Qt Quick is and how developers came up with a declarative way, similar to CSS or JSON, to write the language. With all that clear, he then started with the Qt Quick tutorial and code examples that could be run with qmlviewer, the QML interpreter. In the second part he focused on the QML language itself, going into detail on how to create custom QML components. There were also lots of pointers to the excellent Qt documentation.

Check out the session log here.

Make your applications work in the cloud with Ubuntu One

By Stuart Langridge

Stuart gave a great overview of how to add the cloud to existing apps and how to make new apps for the cloud, letting Ubuntu One do all the hard work for you: from managing identities and password renewal to sharing data between applications. And all that on the web, the desktop, mobile… all your stuff everywhere! He then showed us some simple code to sync playlists to the cloud, ready for streaming. File sync is also an important Ubuntu One feature apps can make use of for sharing, and he went through a couple of the many cool ways you can use it. The last mention was of API documentation, something Stuart is working on this cycle.

Check out the session log here.

Take control of your desktop easily with DBus

By Alejandro J. Cura

In this session Alejandro showed us in a hands-on and easy to follow way different bits and pieces of D-Bus, and how applications in the desktop can communicate through it. He went through real life examples to show how to do simple tasks and explained how they can be achieved with D-Bus.

Check out the session log here.

Touchégg: Bringing Multitouch Gestures to your Desktop

By José Expósito

In the second multitouch session of the week, app developer José Expósito started by showcasing Touchégg, how it works and its features: recognizing multitouch gestures and getting the most out of multitouch devices. He then went on to describe which gestures it supports, such as tap, drag, pinch or tap & hold, and the different actions that can be associated with gestures, showing us a really cool video of Touchégg in action. The second part of the talk focused on the technologies used to develop Touchégg: uTouch-GEIS, through its simplified interface, and Qt.

Check out the session log here.

Unity: Integrating with Launcher and Places

By Mikkel Kamstrup Erlandsen

Mikkel used the intro of the talk to set a couple of things straight: “Places” are going to be called “Lenses” in the next cycle, and libunity does not yet guarantee API or ABI stability. He then moved on to Unity Launcher integration, and how applications can use static quicklists as well as more advanced features such as counts, progress bars, window flashing and dynamic quicklists. The second part was about Places: remote databases that provide data for Unity to render. Through a Python code example he showed us in detail all the aspects of creating a Unity Place.

Check out the session log here.

Tracking Source Code History with Bazaar

By Jelmer Vernooij

Jelmer, a seasoned Bazaar hacker, started off by introducing what bzr is: a modern distributed version control system. He then went through the basics with a hands-on example, covering the creation of a branch, the first commit, and several of the most handy bzr commands. As a wrap-up, he showcased more advanced features such as source recipes: scripts that combine branches and build daily Debian packages from them.

Check out the session log here.

The Day Ahead: Upcoming Sessions for Day 4

We’re featuring a Qt Quick marathon today: two sessions in a row. Following that: how to do RAD with yet another framework, Quickly; how to get your applications into Ubuntu; and how to get them translated in Launchpad. Enjoy!

16:00 UTC
Qt Quick: Elements/Animations/States – Jürgen Bocklage-Ryannel
Another day and more featured Qt content: this time Jürgen will take us through the different elements, animations and states Qt Quick provides, and will show us through examples how to make use of them.

17:00 UTC
Qt Quick: Rapid Prototyping – Jürgen Bocklage-Ryannel
If one session weren’t enough, here’s the continuation: more Qt goodness, this time a hands-on session to develop a small application from start to finish and experience the whole process from the front row.

18:00 UTC
Rapid App Development with Quickly – Michael Terry
Mike will show you how to write applications in no time with the power of Python and Quickly: bringing back the fun in programming.

19:00 UTC
Getting Your App in the Distro: the Application Review Process – Allison Randal
A while back we created an easy process defining how to get applications into Ubuntu, so that they can be added in a matter of weeks rather than waiting for the next release. Allison, wearing her Ubuntu Technical Architect and Application Review Board member hat, will walk you through the Application Review Process.

20:00 UTC
Adding Indicator Support to your Apps – Ted Gould
Join the man who knows most about indicators in a session that will teach you how to integrate your application even more into Ubuntu. They’re slick, robust and consistent: bringing indicator support to your apps.

21:00 UTC
Using Launchpad to get your application translated – Henning Eggers
One of the coolest features of Launchpad is that it helps grow a translation community around your project. You can make your application translatable in Launchpad and deliver it in almost any language. Henning will teach you how to do this, picking up where the previous session on translations left off.

Looking forward to seeing you all there!


Read more
beuno

I have to confess, after I found out we were shipping Unity in Ubuntu by default, I was nervous. I got asked many times what my feelings were, and I think I generally dodged the question. This was a pretty risky move, and we are still a few months away from finding out how well the risk pays off.
Given that a lot of the design behind Unity wasn't done in the open and hadn't had a long time to mature, I've been sceptical of whether we (as in, the Ubuntu project) could pull off such a massive change in such a short period of time and still have happy users.

I've been using Unity on and off on my netbook (which is my secondary computer), but while enjoying a long weekend I've spent the last few days using it a lot, and my feelings towards it have changed quite a bit.

I think it was the right decision. It feels like an overall improved experience, even with its current rough edges. Exactly what I think we need to win over a wider audience and have them fall in love with Ubuntu head over heels. Everything is starting to feel much more tightly integrated and purposeful, with some eye-candy sprinkled in a lot of the right places.
I'm really glad Canonical decided to invest so heavily in such a risky and insanely complicated task; Natty is probably one of the most exciting releases I can remember.

There are still a few key challenges ahead, the most notable to me being making the design process more open and inclusive while still delivering something that feels polished and not like a pile of consensus between people who have gotten good at arguing. The Ayatana community does seem to be slowly growing, though, so the future looks pretty bright. Getting the right balance between Canonical and a community around design feels like one of the hardest problems to solve; luckily, Canonical continues to hire the brightest and most enthusiastic minds around, so I'm sure it will eventually feel like a solved problem.

I think it's been almost 6 years since I landed in the Ubuntu world. I've done all kinds of things in the community, ranging from starting and building the Argentine LoCo to editing the Ubuntu Weekly Newsletter, to evaluating new Ubuntu members in the Americas region. With its ups and downs, great press and wild controversies, it still feels like the best place to be.

Read more
mandel

For those that do not know what the keyring module is, here is the official description:

The Python keyring lib provides an easy way to access the system keyring service from python. It can be used in any application that needs safe password storage.

The module is a very nice idea and has been rather useful during the Ubuntu One port. I just have a problem with it which is the lack of a method to delete a password.

I have forked the project on Bitbucket and added the missing methods. Of course I have requested a pull from the original project, so unless there are problems the new code should be ‘landable’ (is landable even a word?) in trunk and usable.

For those that cannot wait for that, you can grab the code by doing:

hg clone https://bitbucket.org/mandel/pykeyring-delete-password
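
Just to show the kind of API this adds, here is a small sketch of how a delete call would sit next to the existing set/get ones; the exact method name in my branch may differ from the delete_password shown here:

import keyring

# Store and read back a secret, as the module already allows
keyring.set_password('ubuntuone', 'user@example.com', 's3cr3t')
print keyring.get_password('ubuntuone', 'user@example.com')

# The new bit: remove the secret once it is no longer needed
keyring.delete_password('ubuntuone', 'user@example.com')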

Read more
mandel

Before I introduce the code, let me say that this is not a 100% exact implementation of the interfaces that can be found in pyinotify but an implementation of a subset that matches my needs. The main idea of this post is to give an example of implementing such a library for Windows, reusing the code that can be found in pyinotify.

Now that I have excused myself, let’s get into the code. First of all, there are a number of classes from pyinotify that we can use in our code. That subset of classes is the code below, which I grabbed from pyinotify’s git:

#!/usr/bin/env python
 
# pyinotify.py - python interface to inotify
# Copyright (c) 2010 Sebastien Martini <seb@dbzteam.org>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
"""Platform agnostic code grabed from pyinotify."""
import logging
import os
 
COMPATIBILITY_MODE = False
 
 
class RawOutputFormat:
    """
    Format string representations.
    """
    def __init__(self, format=None):
        self.format = format or {}
 
    def simple(self, s, attribute):
        if not isinstance(s, str):
            s = str(s)
        return (self.format.get(attribute, '') + s +
                self.format.get('normal', ''))
 
    def punctuation(self, s):
        """Punctuation color."""
        return self.simple(s, 'normal')
 
    def field_value(self, s):
        """Field value color."""
        return self.simple(s, 'purple')
 
    def field_name(self, s):
        """Field name color."""
        return self.simple(s, 'blue')
 
    def class_name(self, s):
        """Class name color."""
        return self.format.get('red', '') + self.simple(s, 'bold')
 
output_format = RawOutputFormat()
 
 
class EventsCodes:
    """
    Set of codes corresponding to each kind of events.
    Some of these flags are used to communicate with inotify, whereas
    the others are sent to userspace by inotify notifying some events.
 
    @cvar IN_ACCESS: File was accessed.
    @type IN_ACCESS: int
    @cvar IN_MODIFY: File was modified.
    @type IN_MODIFY: int
    @cvar IN_ATTRIB: Metadata changed.
    @type IN_ATTRIB: int
    @cvar IN_CLOSE_WRITE: Writable file was closed.
    @type IN_CLOSE_WRITE: int
    @cvar IN_CLOSE_NOWRITE: Unwritable file closed.
    @type IN_CLOSE_NOWRITE: int
    @cvar IN_OPEN: File was opened.
    @type IN_OPEN: int
    @cvar IN_MOVED_FROM: File was moved from X.
    @type IN_MOVED_FROM: int
    @cvar IN_MOVED_TO: File was moved to Y.
    @type IN_MOVED_TO: int
    @cvar IN_CREATE: Subfile was created.
    @type IN_CREATE: int
    @cvar IN_DELETE: Subfile was deleted.
    @type IN_DELETE: int
    @cvar IN_DELETE_SELF: Self (watched item itself) was deleted.
    @type IN_DELETE_SELF: int
    @cvar IN_MOVE_SELF: Self (watched item itself) was moved.
    @type IN_MOVE_SELF: int
    @cvar IN_UNMOUNT: Backing fs was unmounted.
    @type IN_UNMOUNT: int
    @cvar IN_Q_OVERFLOW: Event queued overflowed.
    @type IN_Q_OVERFLOW: int
    @cvar IN_IGNORED: File was ignored.
    @type IN_IGNORED: int
    @cvar IN_ONLYDIR: only watch the path if it is a directory (new
                      in kernel 2.6.15).
    @type IN_ONLYDIR: int
    @cvar IN_DONT_FOLLOW: don't follow a symlink (new in kernel 2.6.15).
                          IN_ONLYDIR we can make sure that we don't watch
                          the target of symlinks.
    @type IN_DONT_FOLLOW: int
    @cvar IN_MASK_ADD: add to the mask of an already existing watch (new
                       in kernel 2.6.14).
    @type IN_MASK_ADD: int
    @cvar IN_ISDIR: Event occurred against dir.
    @type IN_ISDIR: int
    @cvar IN_ONESHOT: Only send event once.
    @type IN_ONESHOT: int
    @cvar ALL_EVENTS: Alias for considering all of the events.
    @type ALL_EVENTS: int
    """
 
    # The idea here is 'configuration-as-code' - this way, we get
    # our nice class constants, but we also get nice human-friendly text
    # mappings to do lookups against as well, for free:
    FLAG_COLLECTIONS = {'OP_FLAGS': {
        'IN_ACCESS'        : 0x00000001,  # File was accessed
        'IN_MODIFY'        : 0x00000002,  # File was modified
        'IN_ATTRIB'        : 0x00000004,  # Metadata changed
        'IN_CLOSE_WRITE'   : 0x00000008,  # Writable file was closed
        'IN_CLOSE_NOWRITE' : 0x00000010,  # Unwritable file closed
        'IN_OPEN'          : 0x00000020,  # File was opened
        'IN_MOVED_FROM'    : 0x00000040,  # File was moved from X
        'IN_MOVED_TO'      : 0x00000080,  # File was moved to Y
        'IN_CREATE'        : 0x00000100,  # Subfile was created
        'IN_DELETE'        : 0x00000200,  # Subfile was deleted
        'IN_DELETE_SELF'   : 0x00000400,  # Self (watched item itself)
                                          # was deleted
        'IN_MOVE_SELF'     : 0x00000800,  # Self(watched item itself) was moved
        },
                        'EVENT_FLAGS': {
        'IN_UNMOUNT'       : 0x00002000,  # Backing fs was unmounted
        'IN_Q_OVERFLOW'    : 0x00004000,  # Event queued overflowed
        'IN_IGNORED'       : 0x00008000,  # File was ignored
        },
                        'SPECIAL_FLAGS': {
        'IN_ONLYDIR'       : 0x01000000,  # only watch the path if it is a
                                          # directory
        'IN_DONT_FOLLOW'   : 0x02000000,  # don't follow a symlink
        'IN_MASK_ADD'      : 0x20000000,  # add to the mask of an already
                                          # existing watch
        'IN_ISDIR'         : 0x40000000,  # event occurred against dir
        'IN_ONESHOT'       : 0x80000000,  # only send event once
        },
                        }
 
    def maskname(mask):
        """
        Returns the event name associated to mask. IN_ISDIR is appended to
        the result when appropriate. Note: only one event is returned, because
        only one event can be raised at a given time.
 
        @param mask: mask.
        @type mask: int
        @return: event name.
        @rtype: str
        """
        ms = mask
        name = '%s'
        if mask & IN_ISDIR:
            ms = mask - IN_ISDIR
            name = '%s|IN_ISDIR'
        return name % EventsCodes.ALL_VALUES[ms]
 
    maskname = staticmethod(maskname)
 
 
# So let's now turn the configuration into code
EventsCodes.ALL_FLAGS = {}
EventsCodes.ALL_VALUES = {}
for flagc, valc in EventsCodes.FLAG_COLLECTIONS.items():
    # Make the collections' members directly accessible through the
    # class dictionary
    setattr(EventsCodes, flagc, valc)
 
    # Collect all the flags under a common umbrella
    EventsCodes.ALL_FLAGS.update(valc)
 
    # Make the individual masks accessible as 'constants' at globals() scope
    # and masknames accessible by values.
    for name, val in valc.items():
        globals()[name] = val
        EventsCodes.ALL_VALUES[val] = name
 
 
# all 'normal' events
ALL_EVENTS = reduce(lambda x, y: x | y, EventsCodes.OP_FLAGS.values())
EventsCodes.ALL_FLAGS['ALL_EVENTS'] = ALL_EVENTS
EventsCodes.ALL_VALUES[ALL_EVENTS] = 'ALL_EVENTS'
 
 
class _Event:
    """
    Event structure, represent events raised by the system. This
    is the base class and should be subclassed.
 
    """
    def __init__(self, dict_):
        """
        Attach attributes (contained in dict_) to self.
 
        @param dict_: Set of attributes.
        @type dict_: dictionary
        """
        for tpl in dict_.items():
            setattr(self, *tpl)
 
    def __repr__(self):
        """
        @return: Generic event string representation.
        @rtype: str
        """
        s = ''
        for attr, value in sorted(self.__dict__.items(), key=lambda x: x[0]):
            if attr.startswith('_'):
                continue
            if attr == 'mask':
                value = hex(getattr(self, attr))
            elif isinstance(value, basestring) and not value:
                value = "''"
            s += ' %s%s%s' % (output_format.field_name(attr),
                              output_format.punctuation('='),
                              output_format.field_value(value))
 
        s = '%s%s%s %s' % (output_format.punctuation('<'),
                           output_format.class_name(self.__class__.__name__),
                           s,
                           output_format.punctuation('>'))
        return s
 
    def __str__(self):
        return repr(self)
 
 
class _RawEvent(_Event):
    """
    Raw event; it contains only the information provided by the system.
    It doesn't infer anything.
    """
    def __init__(self, wd, mask, cookie, name):
        """
        @param wd: Watch Descriptor.
        @type wd: int
        @param mask: Bitmask of events.
        @type mask: int
        @param cookie: Cookie.
        @type cookie: int
        @param name: Basename of the file or directory against which the
                     event was raised in case where the watched directory
                     is the parent directory. None if the event was raised
                     on the watched item itself.
        @type name: string or None
        """
        # Use this variable to cache the result of str(self), this object
        # is immutable.
        self._str = None
        # name: remove trailing '\0'
        d = {'wd': wd,
             'mask': mask,
             'cookie': cookie,
             'name': name.rstrip('\0')}
        _Event.__init__(self, d)
        logging.debug(str(self))
 
    def __str__(self):
        if self._str is None:
            self._str = _Event.__str__(self)
        return self._str
 
 
class Event(_Event):
    """
    This class contains all the useful information about the observed
    event. However, the presence of each field is not guaranteed and
    depends on the type of event. In effect, some fields are irrelevant
    for some kind of event (for example 'cookie' is meaningless for
    IN_CREATE whereas it is mandatory for IN_MOVE_TO).
 
    The possible fields are:
      - wd (int): Watch Descriptor.
      - mask (int): Mask.
      - maskname (str): Readable event name.
      - path (str): path of the file or directory being watched.
      - name (str): Basename of the file or directory against which the
              event was raised in case where the watched directory
              is the parent directory. None if the event was raised
              on the watched item itself. This field is always provided
              even if the string is ''.
      - pathname (str): Concatenation of 'path' and 'name'.
      - src_pathname (str): Only present for IN_MOVED_TO events and only in
              the case where IN_MOVED_FROM events are watched too. Holds the
              source pathname from where pathname was moved from.
      - cookie (int): Cookie.
      - dir (bool): True if the event was raised against a directory.
 
    """
    def __init__(self, raw):
        """
        Concretely, this is the raw event plus inferred infos.
        """
        _Event.__init__(self, raw)
        self.maskname = EventsCodes.maskname(self.mask)
        if COMPATIBILITY_MODE:
            self.event_name = self.maskname
        try:
            if self.name:
                self.pathname = os.path.abspath(os.path.join(self.path,
                                                             self.name))
            else:
                self.pathname = os.path.abspath(self.path)
        except AttributeError, err:
            # Usually this is not an error: some events are perfectly valid
            # despite the lack of these attributes.
            logging.debug(err)
 
 
class _ProcessEvent:
    """
    Abstract processing event class.
    """
    def __call__(self, event):
        """
        To behave like a functor the object must be callable.
        This method is a dispatch method. Its lookup order is:
          1. process_MASKNAME method
          2. process_FAMILY_NAME method
          3. otherwise calls process_default
 
        @param event: Event to be processed.
        @type event: Event object
        @return: By convention when used from the ProcessEvent class:
                 - Returning False or None (default value) means keep on
                 executing next chained functors (see chain.py example).
                 - Returning True instead means do not execute next
                   processing functions.
        @rtype: bool
        @raise ProcessEventError: Event object undispatchable,
                                  unknown event.
        """
        stripped_mask = event.mask - (event.mask & IN_ISDIR)
        maskname = EventsCodes.ALL_VALUES.get(stripped_mask)
        if maskname is None:
            raise ProcessEventError("Unknown mask 0x%08x" % stripped_mask)
 
        # 1- look for process_MASKNAME
        meth = getattr(self, 'process_' + maskname, None)
        if meth is not None:
            return meth(event)
        # 2- look for process_FAMILY_NAME
        meth = getattr(self, 'process_IN_' + maskname.split('_')[1], None)
        if meth is not None:
            return meth(event)
        # 3- default call method process_default
        return self.process_default(event)
 
    def __repr__(self):
        return '<%s>' % self.__class__.__name__
 
 
class ProcessEvent(_ProcessEvent):
    """
    Process events objects, can be specialized via subclassing, thus its
    behavior can be overridden:
 
    Note: you should not override __init__ in your subclass instead define
    a my_init() method, this method will be called automatically from the
    constructor of this class with its optionals parameters.
 
      1. Provide specialized individual methods, e.g. process_IN_DELETE for
         processing a precise type of event (e.g. IN_DELETE in this case).
      2. Or/and provide methods for processing events by 'family', e.g.
         process_IN_CLOSE method will process both IN_CLOSE_WRITE and
         IN_CLOSE_NOWRITE events (if process_IN_CLOSE_WRITE and
         process_IN_CLOSE_NOWRITE aren't defined though).
      3. Or/and override process_default for catching and processing all
         the remaining types of events.
    """
    pevent = None
 
    def __init__(self, pevent=None, **kargs):
        """
        Enable chaining of ProcessEvent instances.
 
        @param pevent: Optional callable object, will be called on event
                       processing (before self).
        @type pevent: callable
        @param kargs: This constructor is implemented as a template method
                      delegating its optionals keyworded arguments to the
                      method my_init().
        @type kargs: dict
        """
        self.pevent = pevent
        self.my_init(**kargs)
 
    def my_init(self, **kargs):
        """
        This method is called from ProcessEvent.__init__(). This method is
        empty here and must be redefined to be useful. In effect, if you
        need to specifically initialize your subclass' instance then you
        just have to override this method in your subclass. Then all the
        keyworded arguments passed to ProcessEvent.__init__() will be
        transmitted as parameters to this method. Beware you MUST pass
        keyword arguments though.
 
        @param kargs: optional delegated arguments from __init__().
        @type kargs: dict
        """
        pass
 
    def __call__(self, event):
        stop_chaining = False
        if self.pevent is not None:
            # By default methods return None so we set as guideline
            # that methods asking for stop chaining must explicitely
            # return non None or non False values, otherwise the default
            # behavior will be to accept chain call to the corresponding
            # local method.
            stop_chaining = self.pevent(event)
        if not stop_chaining:
            return _ProcessEvent.__call__(self, event)
 
    def nested_pevent(self):
        return self.pevent
 
    def process_IN_Q_OVERFLOW(self, event):
        """
        By default this method only reports warning messages, you can
        override it by subclassing ProcessEvent and implementing your own
        process_IN_Q_OVERFLOW method. The actions you can take on receiving
        this event is either to update the variable max_queued_events in order
        to handle more simultaneous events or to modify your code in order to
        accomplish a better filtering diminishing the number of raised events.
        Because this method is defined, IN_Q_OVERFLOW will never get
        transmitted as arguments to process_default calls.
 
        @param event: IN_Q_OVERFLOW event.
        @type event: dict
        """
        logging.warning('Event queue overflowed.')
 
    def process_default(self, event):
        """
        Default processing event method. By default does nothing. Subclass
        ProcessEvent and redefine this method in order to modify its behavior.
 
        @param event: Event to be processed. Can be of any type of events but
                      IN_Q_OVERFLOW events (see method process_IN_Q_OVERFLOW).
        @type event: Event instance
        """
        pass
 
 
class PrintAllEvents(ProcessEvent):
    """
    Dummy class used to print events strings representations. For instance this
    class is used from command line to print all received events to stdout.
    """
    def my_init(self, out=None):
        """
        @param out: Where events will be written.
        @type out: Object providing a valid file object interface.
        """
        if out is None:
            out = sys.stdout
        self._out = out
 
    def process_default(self, event):
        """
        Writes event string representation to file object provided to
        my_init().
 
        @param event: Event to be processed. Can be of any type of events but
                      IN_Q_OVERFLOW events (see method process_IN_Q_OVERFLOW).
        @type event: Event instance
        """
        self._out.write(str(event))
        self._out.write('\n')
        self._out.flush()
 
 
class WatchManagerError(Exception):
    """
    WatchManager Exception. Raised on error encountered on watches
    operations.
 
    """
    def __init__(self, msg, wmd):
        """
        @param msg: Exception string's description.
        @type msg: string
        @param wmd: This dictionary contains the wd assigned to paths of the
                    same call for which watches were successfully added.
        @type wmd: dict
        """
        self.wmd = wmd
        Exception.__init__(self, msg)

Unfortunately, we need to implement the code that talks to the Win32 API to be able to retrieve the events in the file system. In my design this is done by the Watch class, which looks like this:

# Author: Manuel de la Pena <manuel@canonical.com>
#
# Copyright 2011 Canonical Ltd.
#
# This program is free software: you can redistribute it and/or modify it
# under the terms of the GNU General Public License version 3, as published
# by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranties of
# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR
# PURPOSE.  See the GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program.  If not, see <http://www.gnu.org/licenses/>.
"""File notifications on windows."""
 
import logging
import os
import re
 
import winerror
 
from Queue import Queue, Empty
from threading import Thread
from uuid import uuid4
from twisted.internet import task, reactor
from win32con import (
    FILE_SHARE_READ,
    FILE_SHARE_WRITE,
    FILE_FLAG_BACKUP_SEMANTICS,
    FILE_NOTIFY_CHANGE_FILE_NAME,
    FILE_NOTIFY_CHANGE_DIR_NAME,
    FILE_NOTIFY_CHANGE_ATTRIBUTES,
    FILE_NOTIFY_CHANGE_SIZE,
    FILE_NOTIFY_CHANGE_LAST_WRITE,
    FILE_NOTIFY_CHANGE_SECURITY,
    OPEN_EXISTING
)
from win32file import CreateFile, ReadDirectoryChangesW
from ubuntuone.platform.windows.pyinotify import (
    Event,
    WatchManagerError,
    ProcessEvent,
    PrintAllEvents,
    IN_OPEN,
    IN_CLOSE_NOWRITE,
    IN_CLOSE_WRITE,
    IN_CREATE,
    IN_ISDIR,
    IN_DELETE,
    IN_MOVED_FROM,
    IN_MOVED_TO,
    IN_MODIFY,
    IN_IGNORED
)
from ubuntuone.syncdaemon.filesystem_notifications import (
    GeneralINotifyProcessor
)
from ubuntuone.platform.windows.os_helper import (
    LONG_PATH_PREFIX,
    abspath,
    listdir
)
 
# constants found in the msdn documentation:
# http://msdn.microsoft.com/en-us/library/ff538834(v=vs.85).aspx
FILE_LIST_DIRECTORY = 0x0001
FILE_NOTIFY_CHANGE_LAST_ACCESS = 0x00000020
FILE_NOTIFY_CHANGE_CREATION = 0x00000040
 
# a map between the few events that we have on windows and those
# found in pyinotify
WINDOWS_ACTIONS = {
  1: IN_CREATE,
  2: IN_DELETE,
  3: IN_MODIFY,
  4: IN_MOVED_FROM,
  5: IN_MOVED_TO
}
 
# quickly translates the event and its is_dir state to our standard events
NAME_TRANSLATIONS = {
    IN_OPEN: 'FS_FILE_OPEN',
    IN_CLOSE_NOWRITE: 'FS_FILE_CLOSE_NOWRITE',
    IN_CLOSE_WRITE: 'FS_FILE_CLOSE_WRITE',
    IN_CREATE: 'FS_FILE_CREATE',
    IN_CREATE | IN_ISDIR: 'FS_DIR_CREATE',
    IN_DELETE: 'FS_FILE_DELETE',
    IN_DELETE | IN_ISDIR: 'FS_DIR_DELETE',
    IN_MOVED_FROM: 'FS_FILE_DELETE',
    IN_MOVED_FROM | IN_ISDIR: 'FS_DIR_DELETE',
    IN_MOVED_TO: 'FS_FILE_CREATE',
    IN_MOVED_TO | IN_ISDIR: 'FS_DIR_CREATE',
}
 
# the default mask to be used in the watches added by the FilesystemMonitor
# class
FILESYSTEM_MONITOR_MASK = FILE_NOTIFY_CHANGE_FILE_NAME | \
    FILE_NOTIFY_CHANGE_DIR_NAME | \
    FILE_NOTIFY_CHANGE_ATTRIBUTES | \
    FILE_NOTIFY_CHANGE_SIZE | \
    FILE_NOTIFY_CHANGE_LAST_WRITE | \
    FILE_NOTIFY_CHANGE_SECURITY | \
    FILE_NOTIFY_CHANGE_LAST_ACCESS
 
 
# The implementation of the code that is provided as the pyinotify
# substitute
class Watch(object):
    """Implement the same functions as pyinotify.Watch."""
 
    def __init__(self, watch_descriptor, path, mask, auto_add,
        events_queue=None, exclude_filter=None, proc_fun=None):
        super(Watch, self).__init__()
        self.log = logging.getLogger('ubuntuone.platform.windows.' +
            'filesystem_notifications.Watch')
        self._watching = False
        self._descriptor = watch_descriptor
        self._auto_add = auto_add
        self.exclude_filter = None
        self._proc_fun = proc_fun
        self._cookie = None
        self._source_pathname = None
        # remember the subdirs we have so that when we have a delete we can
        # check if it was a remove
        self._subdirs = []
        # ensure that we work with an abspath and that we can deal with
        # long paths over 260 chars.
        self._path = os.path.abspath(path)
        if not self._path.startswith(LONG_PATH_PREFIX):
            self._path = LONG_PATH_PREFIX + self._path
        self._mask = mask
        # lets make the q as big as possible
        self._raw_events_queue = Queue()
        if not events_queue:
            events_queue = Queue()
        self.events_queue = events_queue
 
    def _path_is_dir(self, path):
        """"Check if the path is a dir and update the local subdir list."""
        self.log.debug('Testing if path "%s" is a dir', path)
        is_dir = False
        if os.path.exists(path):
            is_dir = os.path.isdir(path)
        else:
            self.log.debug('Path "%s" was deleted subdirs are %s.',
                path, self._subdirs)
            # we removed the path, we look in the internal list
            if path in self._subdirs:
                is_dir = True
                self._subdirs.remove(path)
        if is_dir:
            self.log.debug('Adding %s to subdirs %s', path, self._subdirs)
            self._subdirs.append(path)
        return is_dir
 
    def _process_events(self):
        """Process the events form the queue."""
        # we transform the events to be the same as the one in pyinotify
        # and then use the proc_fun
        while self._watching or not self._raw_events_queue.empty():
            file_name, action = self._raw_events_queue.get()
            # map the windows events to the pyinotify ones, tis is dirty but
            # makes the multiplatform better, linux was first :P
            is_dir = self._path_is_dir(file_name)
            if os.path.exists(file_name):
                is_dir = os.path.isdir(file_name)
            else:
                # we removed the path, we look in the internal list
                if file_name in self._subdirs:
                    is_dir = True
                    self._subdirs.remove(file_name)
            if is_dir:
                self._subdirs.append(file_name)
            mask = WINDOWS_ACTIONS[action]
            head, tail = os.path.split(file_name)
            if is_dir:
                mask |= IN_ISDIR
            event_raw_data = {
                'wd': self._descriptor,
                'dir': is_dir,
                'mask': mask,
                'name': tail,
                'path': head.replace(self.path, '.')
            }
            # because of the way in which the win api fires the events we know
            # for sure that no move events will be added in the wrong order;
            # this is kind of hacky and I don't like it much
            if WINDOWS_ACTIONS[action] == IN_MOVED_FROM:
                self._cookie = str(uuid4())
                self._source_pathname = tail
                event_raw_data['cookie'] = self._cookie
            if WINDOWS_ACTIONS[action] == IN_MOVED_TO:
                event_raw_data['src_pathname'] = self._source_pathname
                event_raw_data['cookie'] = self._cookie
            event = Event(event_raw_data)
            # FIXME: event deduces the pathname wrong and we need manually
            # set it
            event.pathname = file_name
            # add the event only if we do not have an exclude filter or
            # the exclude filter returns False, that is, the event will not
            # be excluded
            if not self.exclude_filter or not self.exclude_filter(event):
                self.log.debug('Adding event %s to queue.', event)
                self.events_queue.put(event)
 
    def _watch(self):
        """Watch a path that is a directory."""
        # we are going to be using ReadDirectoryChangesW, which requires
        # a directory handle and the mask to be used.
        handle = CreateFile(
            self._path,
            FILE_LIST_DIRECTORY,
            FILE_SHARE_READ | FILE_SHARE_WRITE,
            None,
            OPEN_EXISTING,
            FILE_FLAG_BACKUP_SEMANTICS,
            None
        )
        self.log.debug('Watching path %s.', self._path)
        while self._watching:
            # important information to know about the parameters:
            # param 1: the handle to the dir
            # param 2: the size of the buffer used in the kernel to store
            # events that might be lost while the call is being performed.
            # This is complicated to fine tune, since if you create lots of
            # watchers you might use too much memory and make your OS BSOD
            results = ReadDirectoryChangesW(
                handle,
                1024,
                self._auto_add,
                self._mask,
                None,
                None
            )
            # add the new events to the queue so that they can be processed
            # regardless of the speed.
            for action, file in results:
                full_filename = os.path.join(self._path, file)
                self._raw_events_queue.put((full_filename, action))
                self.log.debug('Added %s to raw events queue.',
                    (full_filename, action))
 
    def start_watching(self):
        """Tell the watch to start processing events."""
        # get the current subdirectories of the path
        for current_child in listdir(self._path):
            full_child_path = os.path.join(self._path, current_child)
            if os.path.isdir(full_child_path):
                self._subdirs.append(full_child_path)
        # start two threads: one to watch the path, the other to
        # process the events.
        self.log.debug('Start watching path.')
        self._watching = True
        watch_thread = Thread(target=self._watch,
            name='Watch(%s)' % self._path)
        process_thread = Thread(target=self._process_events,
            name='Process(%s)' % self._path)
        process_thread.start()
        watch_thread.start()
 
    def stop_watching(self):
        """Tell the watch to stop processing events."""
        self._watching = False
        self._subdirs = []
 
    def update(self, mask, proc_fun=None, auto_add=False):
        """Update the info used by the watcher."""
        self.log.debug('update(%s, %s, %s)', mask, proc_fun, auto_add)
        self._mask = mask
        self._proc_fun = proc_fun
        self._auto_add = auto_add
 
    @property
    def path(self):
        """Return the patch watched."""
        return self._path
 
    @property
    def auto_add(self):
        return self._auto_add
 
    @property
    def proc_fun(self):
        return self._proc_fun
 
 
class WatchManager(object):
    """Implement the same functions as pyinotify.WatchManager."""
 
    def __init__(self, exclude_filter=lambda path: False):
        """Init the manager to keep trak of the different watches."""
        super(WatchManager, self).__init__()
        self.log = logging.getLogger('ubuntuone.platform.windows.'
            + 'filesystem_notifications.WatchManager')
        self._wdm = {}
        self._wd_count = 0
        self._exclude_filter = exclude_filter
        self._events_queue = Queue()
        self._ignored_paths = []
 
    def stop(self):
        """Close the manager and stop all watches."""
        self.log.debug('Stopping watches.')
        for current_wd in self._wdm:
            self._wdm[current_wd].stop_watching()
            self.log.debug('Watch for %s stopped.', self._wdm[current_wd].path)
 
    def get_watch(self, wd):
        """Return the watch with the given descriptor."""
        return self._wdm[wd]
 
    def del_watch(self, wd):
        """Delete the watch with the given descriptor."""
        try:
            watch = self._wdm[wd]
            watch.stop_watching()
            del self._wdm[wd]
            self.log.debug('Watch %s removed.', wd)
        except KeyError, e:
            logging.error(str(e))
 
    def _add_single_watch(self, path, mask, proc_fun=None, auto_add=False,
        quiet=True, exclude_filter=None):
        self.log.debug('add_single_watch(%s, %s, %s, %s, %s, %s)', path, mask,
            proc_fun, auto_add, quiet, exclude_filter)
        self._wdm[self._wd_count] = Watch(self._wd_count, path, mask,
            auto_add, events_queue=self._events_queue,
            exclude_filter=exclude_filter, proc_fun=proc_fun)
        self._wdm[self._wd_count].start_watching()
        self._wd_count += 1
        self.log.debug('Watch count increased to %s', self._wd_count)
 
    def add_watch(self, path, mask, proc_fun=None, auto_add=False,
        quiet=True, exclude_filter=None):
        if hasattr(path, '__iter__'):
            self.log.debug('Added collection of watches.')
            # we are dealing with a collection of paths
            for current_path in path:
                if not self.get_wd(current_path):
                    self._add_single_watch(current_path, mask, proc_fun,
                        auto_add, quiet, exclude_filter)
        elif not self.get_wd(path):
            self.log.debug('Adding single watch.')
            self._add_single_watch(path, mask, proc_fun, auto_add,
                quiet, exclude_filter)
 
    def update_watch(self, wd, mask=None, proc_fun=None, rec=False,
                     auto_add=False, quiet=True):
        try:
            watch = self._wdm[wd]
            watch.stop_watching()
            self.log.debug('Stopped watch on %s for update.', watch.path)
            # update the data and restart watching
            auto_add = auto_add or rec
            watch.update(mask, proc_fun=proc_fun, auto_add=auto_add)
            # only start the watcher again if the mask was given, otherwise
            # we are not watching and therefore do not care
            if mask:
                watch.start_watching()
        except KeyError, e:
            self.log.error(str(e))
            if not quiet:
                raise WatchManagerError('Watch %s was not found' % wd, {})
 
    def get_wd(self, path):
        """Return the watcher that is used to watch the given path."""
        for current_wd in self._wdm:
            if self._wdm[current_wd].path in path and \
                self._wdm[current_wd].auto_add:
                return current_wd
 
    def get_path(self, wd):
        """Return the path watched by the wath with the given wd."""
        watch_ = self._wmd.get(wd)
        if watch:
            return watch.path
 
    def rm_watch(self, wd, rec=False, quiet=True):
        """Remove the the watch with the given wd."""
        try:
            watch = self._wdm[wd]
            watch.stop_watching()
            del self._wdm[wd]
        except KeyError, err:
            self.log.error(str(err))
            if not quiet:
                raise WatchManagerError('Watch %s was not found' % wd, {})
 
    def rm_path(self, path):
        """Remove a watch to the given path."""
        # it would be very tricky to remove a subpath from a watcher that is
        # looking at changes in ther kids. To make it simpler and less error
        # prone (and even better performant since we use less threads) we will
        # add a filter to the events in the watcher so that the events from
        # that child are not received :)
        def ignore_path(event):
            """Ignore an event if it has a given path."""
            is_ignored = False
            for ignored_path in self._ignored_paths:
                if ignore_path in event.pathname:
                    return True
            return False
 
        wd = self.get_wd(path)
        if wd:
            if self._wdm[wd].path == path:
                self.log.debug('Removing watch for path "%s"', path)
                self.rm_watch(wd)
            else:
                self.log.debug('Adding exclude filter for "%s"', path)
                # we have a watch that contains the path as a child path
                if not path in self._ignored_paths:
                    self._ignored_paths.append(path)
                # FIXME: this assumes that we do not have another exclude
                # function, which is correct in our use case, but what if we
                # move this to other projects eventually? Maybe using the
                # manager's exclude_filter is better
                if not self._wdm[wd].exclude_filter:
                    self._wdm[wd].exclude_filter = ignore_path
 
    @property
    def watches(self):
        """Return a reference to the dictionary that contains the watches."""
        return self._wdm
 
    @property
    def events_queue(self):
        """Return the queue with the events that the manager contains."""
        return self._events_queue
 
 
class Notifier(object):
    """
    Read notifications, process events. Inspired by the pyinotify.Notifier
    """
 
    def __init__(self, watch_manager, default_proc_fun=None, read_freq=0,
                 threshold=10, timeout=-1):
        """Init to process event according to the given timeout & threshold."""
        super(Notifier, self).__init__()
        self.log = logging.getLogger('ubuntuone.platform.windows.'
            + 'filesystem_notifications.Notifier')
        # Watch Manager instance
        self._watch_manager = watch_manager
        # Default processing method
        self._default_proc_fun = default_proc_fun
        if default_proc_fun is None:
            self._default_proc_fun = PrintAllEvents()
        # Loop parameters
        self._read_freq = read_freq
        self._threshold = threshold
        self._timeout = timeout
 
    def proc_fun(self):
        return self._default_proc_fun
 
    def process_events(self):
        """
        Process the event given the threshold and the timeout.
        """
        self.log.debug('Processing events with threshold: %s and timeout: %s',
            self._threshold, self._timeout)
        # we will process an amount of events equal to the threshold of
        # the notifier and will block for the amount given by the timeout
        processed_events = 0
        while processed_events < self._threshold:
            try:
                raw_event = None
                if not self._timeout or self._timeout < 0:
                    raw_event = self._watch_manager.events_queue.get(
                        block=False)
                else:
                    raw_event = self._watch_manager.events_queue.get(
                        timeout=self._timeout)
                watch = self._watch_manager.get_watch(raw_event.wd)
                if watch is None:
                    # Not really sure how we ended up here, nor how we should
                    # handle these types of events and if it is appropriate to
                    # completely skip them (like we are doing here).
                    self.log.warning('Unable to retrieve Watch object '
                        + 'associated to %s', raw_event)
                    processed_events += 1
                    continue
                if watch and watch.proc_fun:
                    self.log.debug('Executing proc_fun from watch.')
                    watch.proc_fun(raw_event)  # user processings
                else:
                    self.log.debug('Executing default_proc_fun')
                    self._default_proc_fun(raw_event)
                processed_events += 1
            except Empty:
                # increase the number of processed events, and continue
                processed_events += 1
                continue
 
    def stop(self):
        """Stop processing events and the watch manager."""
        self._watch_manager.stop()

While one of the threads retrieves the events from the file system, the second one processes them so that they are exposed as pyinotify events. I have done this because I did not want to deal with OVERLAPPED structures for async operations in Win32, and because I wanted to use pyinotify events so that anyone with experience in pyinotify can easily understand the output. I really like this approach because it allowed me to reuse a fair amount of logic that we had in the Ubuntu client and to approach the port in a very TDD way, since the tests I’ve used are the same ones as those found on Ubuntu :)
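
To show how the pieces fit together, here is a minimal usage sketch. It is my own example rather than code from the port: it assumes the classes above live in ubuntuone.platform.windows.filesystem_notifications (as the logger names suggest), and the watched path is just a placeholder.

from ubuntuone.platform.windows.pyinotify import PrintAllEvents
from ubuntuone.platform.windows.filesystem_notifications import (
    WatchManager,
    Notifier,
    FILESYSTEM_MONITOR_MASK,
)

# watch a directory and, thanks to auto_add, all of its subdirectories
manager = WatchManager()
manager.add_watch(u'C:\\Users\\mandel\\Ubuntu One', FILESYSTEM_MONITOR_MASK,
                  auto_add=True)

# drain the shared events queue, printing every pyinotify-style event
notifier = Notifier(manager, default_proc_fun=PrintAllEvents(), timeout=1)
try:
    while True:
        notifier.process_events()
except KeyboardInterrupt:
    notifier.stop()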

Read more