I just released a new PyGObject for GNOME 3.7.91. This brings some marshalling fixes, plugs tons of memory leaks, and now raises a Python
DeprecationWarning when your code calls a method which is marked as deprecated in the typelib. Please note that Python hides these warnings by default, so if you are interested in them you need to run python with the -Wd option.
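For illustration, the stdlib warnings module shows the behaviour that python -Wd enables; old_method below is a hypothetical stand-in for a deprecated typelib method, not actual PyGObject code:

```python
import warnings

def old_method():
    # Hypothetical stand-in for a typelib method marked as deprecated;
    # PyGObject emits a DeprecationWarning like this when it is called.
    warnings.warn("old_method is deprecated", DeprecationWarning, stacklevel=2)
    return 42

# DeprecationWarning is normally hidden, so callers never see it:
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("ignore", DeprecationWarning)  # the usual default
    old_method()
print(len(caught))  # 0: the warning was swallowed

# "python -Wd" switches the filter to "default", making it visible:
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("default", DeprecationWarning)
    old_method()
print(len(caught))  # 1: the warning is now reported
```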
Thanks to all contributors!
This fixes the “bind: address already in use” errors that were popping up in X.org and upower when running under umockdev, and finally gets us working packages for Ubuntu 12.04 LTS (in the daily-builds PPA).
I just released umockdev 0.2.
The big new feature of this release is support for evdev ioctls, i.e. you can now record what e.g. X.org is doing to touchpads, touch screens, etc.:
$ umockdev-record /dev/input/event15 > /tmp/touchpad.umockdev
$ umockdev-record -i /tmp/touchpad.ioctl /dev/input/event15 -- Xorg -logfile /dev/null
and load that back into a testbed with X.org using the dummy driver:
$ cat <<EOF > xorg-dummy.conf
Section "Device"
    Identifier "test"
    Driver "dummy"
EndSection
EOF
$ umockdev-run -l /tmp/touchpad.umockdev -i /dev/input/event15=/tmp/touchpad.ioctl -- \
    Xorg -config xorg-dummy.conf -logfile /tmp/X.log :5
Then e.g. DISPLAY=:5 xinput will recognize the simulated device. Note that Xvfb won’t work, as it does not use udev for device discovery but only adds the XTest virtual devices and nothing else; you need to use the real X.org with the dummy driver to run this as a normal user.
This enables easier debugging of new kinds of input devices, as well as writing tests for handling multiple touchscreens/monitors, integration tests of Wacom devices, and so on.
This release now also works with older automakes and Vala 0.16, so that you can use this from Ubuntu 12.04 LTS. The daily PPA now also has packages for that.
Attention: This version does not work any more with recorded ioctl files from version 0.1.
More detailed list of changes:
I just released a new PyGObject for GNOME 3.7.90, with a nice set of bug fixes and some internal code cleanup. Thanks to all contributors!
umockdev is a set of tools and a library to mock hardware devices for programs that handle Linux hardware devices. It also provides tools to record the properties and behaviour of particular devices, and to run a program or test suite under a test bed with the previously recorded devices loaded.
This allows developers of software like gphoto or libmtp to receive these records in bug reports and recreate the problem on their system without having access to the affected hardware, as well as writing regression tests for those that do not need any particular privileges and are thus capable of running in standard build and test environments.
After working on it for several weeks and lots of rumbling on G+, it’s now useful and documented enough for the first release 0.1!
umockdev consists of the following parts:
The umockdev-record program generates text dumps (conventionally called *.umockdev) of some specified, or all of the system’s devices and their sysfs attributes and udev properties. It can also record ioctls that a particular program sends and receives to/from a device, and store them into a text file (conventionally called *.ioctl).

The libumockdev library provides the UMockdevTestbed GObject class, which builds sysfs and /dev testbeds, provides API to generate devices, attributes, properties, and uevents on the fly, and can load *.umockdev and *.ioctl records into them. It provides VAPI and GI bindings, so you can use it from C, Vala, and any programming language that supports introspection. This is the API that you should use for writing regression tests. You can find the API documentation in docs/reference in the source directory.
The umockdev-run program builds a sandbox using libumockdev, can load *.umockdev and *.ioctl files into it, and run a program in that sandbox. I.e. it is a CLI interface to libumockdev, which is useful in the “debug a failure with a particular device” use case if you get the text dumps from a bug report. It automatically takes care of using the preload library, i.e. you don’t need umockdev-wrapper with this. You cannot use this program if you need to simulate uevents or change attributes/properties on the fly; for those you need to use libumockdev directly.
So how do you use umockdev? For the “debug a problem” use case you usually don’t want to write a program that uses libumockdev, but just use the command line tools. Let’s capture some runs from libmtp tools, and replay them in a mock environment:
$ lsusb
Bus 001 Device 012: ID 0fce:0166 Sony Ericsson Xperia Mini Pro
$ umockdev-record /dev/bus/usb/001/012 > mobile.umockdev
$ umockdev-record --ioctl mobile.ioctl /dev/bus/usb/001/012 mtp-detect
$ umockdev-record --ioctl mobile.ioctl /dev/bus/usb/001/012 mtp-emptyfolders
Note that the device path /dev/bus/usb/001/012 merely echoes what is in mobile.umockdev and is independent of what is actually in the real /dev directory. You can rename that device in the generated *.umockdev files and on the command line.
$ umockdev-run --load mobile.umockdev --ioctl /dev/bus/usb/001/012=mobile.ioctl mtp-detect
$ umockdev-run --load mobile.umockdev --ioctl /dev/bus/usb/001/012=mobile.ioctl mtp-emptyfolders
If you want to write regression tests, it’s usually more flexible to use the library instead of calling everything through
umockdev-run. As a simple example, let’s pretend we want to write tests for upower.
Batteries, and power supplies in general, are simple devices in the sense that userspace programs such as upower only communicate with them through sysfs and uevents; no /dev nor ioctls are necessary. docs/examples/ has two example programs that show how to use libumockdev to create a fake battery device, change it to low charge, send a uevent, and run upower on a local test system D-BUS in the testbed while watching what happens with upower --monitor-detail. battery.c shows how to do that with plain GObject in C; battery.py is the equivalent program in Python that uses the GI binding. You can just run the latter like this:
umockdev-wrapper python3 docs/examples/battery.py
and you will see that upowerd (which runs on a temporary local system D-BUS in the test bed) will report a single battery with 75% charge, which gets down to 2.5% a second later.
The gist of it is that you create a test bed with
UMockdevTestbed *testbed = umockdev_testbed_new ();
and add a device with certain sysfs attributes and udev properties with
gchar *sys_bat = umockdev_testbed_add_device (
        testbed, "power_supply", "fakeBAT0", NULL,
        /* attributes */
        "type", "Battery",
        "present", "1",
        "status", "Discharging",
        "energy_full", "60000000",
        "energy_full_design", "80000000",
        "energy_now", "48000000",
        "voltage_now", "12000000",
        NULL,
        /* properties */
        "POWER_SUPPLY_ONLINE", "1",
        NULL);
You can then e.g. change an attribute and synthesize a “change” uevent with
umockdev_testbed_set_attribute (testbed, sys_bat, "energy_now", "1500000");
umockdev_testbed_uevent (testbed, sys_bat, "change");
In Python or other introspected languages, or in Vala, it works the same way, except that it looks a bit leaner due to “proper” object semantics.
The current set of features should already get you quite far for a range of devices. I’d love to get feedback from you if you use this for anything useful, in particular how to improve the API, the command line tools, or the text dump format. I’m not really happy with the split between umockdev (sys/dev) and ioctl files and the relatively complicated CLI syntax of
umockdev-record, so any suggestion is welcome.
One use case that I have for myself is to extend the coverage of ioctls for input devices such as touch screens and Wacom tablets, so that we can write some tests for gnome-settings-daemon plugins.
I also want to find a way to pass ioctls back to the test suite/calling program instead of having to handle them all in the preload library, which would make it a lot more flexible. However, due to the nature of the ioctl ABI this is not easy.
The code is hosted on GitHub in the umockdev project; this started out as a systemd branch to add this functionality to libudev, but after a discussion with Kay we decided to keep it separate. I kept it in git anyway, given how popular it is today. For the bzr lovers, Launchpad has an import at lp:umockdev.
Finally, if you have questions or want to discuss something, you can always find me on IRC (pitti on Freenode or GNOME).
Thanks for your attention and happy testing!
I just released a new PyGObject for GNOME 3.7.5. Unfortunately
master.gnome.org is out of space right now, so I put the new tarball on my Ubuntu people account for the time being.
This again brings a nice set of memory leak and bug fixes, some more reduction of static bindings, and better support for building under Windows.
Thanks to all contributors!
I just released a new PyGObject, for GNOME 3.7.4 which is due on Wednesday.
This release saw a lot of bug and memory leak fixes again, as well as enabling some more data types such as
GParamSpec, boxed list properties, or directly setting string members in structs.
Thanks to all contributors!
Summary of changes (see change log for complete details):
I just released a new PyGObject, for GNOME 3.7.3 which is due on Wednesday.
This is mostly a bug fix release. There is one API addition: it brings back official support for calling
GLib.io_add_watch() with a Python file object or fd as first argument, in addition to the official API which expects a
GLib.IOChannel object. These modes were marked as deprecated in 3.7.2 (only).
Thanks to all contributors!
Summary of changes (see change log for complete details):
UPDATE: A command porting walk-through has been added to the documentation.
Back around UDS time, I began work on a reboot of Quickly, Ubuntu’s application development tool. After two months and just short of 4000 lines of code written, I’m pleased to announce that the inner-workings of the new code is very nearly complete! Now I’ve reached the point where I need your help.
First, let me go back to what I said needed to be done last time. Port from Python 2 to Python 3: Done. Add built-in argument handling: Done. Add meta-data output: Well, not quite. I’m working on that though, and now I can add it without requiring anything from template authors.
But here are some other things I did get done. Add Bash shell completion: Done. Added Help command (that works with all other commands): Done. Created command class decorators: Done. Support templates installed in any XDG_DATA_DIRS: Done. Allow template overriding on the command-line: Done. Started documentation for template authors: Done.
With the core of the Quickly reboot nearly done, focus can now turn to the templates. At this point I’m reasonably confident that the API used by the templates and commands won’t change (at least not much). The ‘create’ and ‘run’ commands from the ubuntu-application template have already been ported, I used those to help develop the API. But that leaves all the rest of the commands that need to be updated (see list at the bottom of this post). If you want to help make application development in Ubuntu better, this is a great way to contribute.
For now, I want to focus on finishing the port of the ubuntu-application template. This will reveal any changes that might still need to be made to the new API and code, without disrupting multiple templates.
The first thing you need to do is understand how the new Quickly handles templates and commands. I’ve started on some documentation for template developers, with a Getting Started guide that covers the basics. You can also find me in #quickly in Freenode IRC for help.
Next you’ll need to find the code for the command you want to port. If you already have the current Quickly installed, you can find them in /usr/share/quickly/templates/ubuntu-application/, or you can bzr branch lp:quickly to get the source.
The commands are already in Python, but they are stand-alone scripts. You will need to convert them into Python classes, with the code to be executed being called in the run() method. You can add your class to the ./data/templates/ubuntu-application/commands.py file in the new Quickly branch (lp:quickly/reboot). Then submit it as a merge proposal against lp:quickly/reboot.
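As a rough sketch of that conversion (the real base class, module layout, and decorators live in lp:quickly/reboot and its Getting Started guide; all names here are hypothetical stand-ins), the top-level code of an old stand-alone script simply moves into a class's run() method:

```python
class Command:
    """Hypothetical stand-in for the base class the new Quickly provides;
    the real one also wires up argument handling and help output."""
    name = None

    def run(self, args):
        raise NotImplementedError

class RunCommand(Command):
    """Sketch of porting the stand-alone 'run' script: the code that used
    to execute at module level now lives in run()."""
    name = "run"

    def run(self, args):
        project_dir = args[0] if args else "."
        # ... the original script's body (launching the app) would go here ...
        return "running project in %s" % project_dir

print(RunCommand().run(["./myapp"]))  # → running project in ./myapp
```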
So here’s the full list of ubuntu-application template commands. I’ll update this list with progress as it happens. If you want to help, grab one of the TODO commands, and start porting. Email me or ping me on IRC if you need help.
During this latest round of arguing over the inclusion of Amazon search results in the Unity Dash, Alan Bell pointed out the fact that while the default scopes shipped in Ubuntu were made to check the new privacy settings, we didn’t do a very good job of telling third-party developers how to do it.
(Update: I was told a better way of doing this, be sure to read the bottom of the post before implementing it in your own code)
Since I am also a third-party lens developer, I decided to add it to some of my own code and share how to do it with other lens/scope developers. It turns out, it’s remarkably easy to do.
Since the privacy setting is stored in DConf, which we can access via the Gio library, we need to include that in our GObject Introspection imports:
from gi.repository import GLib, Unity, Gio
Then, before performing a search, we need to fetch the Unity Lens settings:
lens_settings = Gio.Settings('com.canonical.Unity.Lenses')
The key we are interested in is 'remote-content-search', and it can have one of two values, 'all' or 'none'. Since my locoteams-scope performs only remote searches, by calling the API on http://loco.ubuntu.com, if the user has asked that no remote searches be made, this scope will return without doing anything.
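To keep the decision testable, the check can be factored into a small guard; this is a sketch of the pattern only (the helper names are mine), with a plain mapping standing in for the Gio.Settings object, which supports the same [] lookup:

```python
REMOTE_SEARCH_KEY = 'remote-content-search'

def remote_search_allowed(settings):
    """Return True only if the user permits remote content searches.

    'settings' is any mapping-like object; in a real scope it would be
    Gio.Settings('com.canonical.Unity.Lenses').
    """
    # The key holds 'all' or 'none'; anything other than 'all' means
    # the scope must not issue remote queries.
    return settings[REMOTE_SEARCH_KEY] == 'all'

def do_search(query, settings):
    """Sketch of a scope search handler that bails out early."""
    if not remote_search_allowed(settings):
        return []  # user opted out of remote content: do nothing
    # ... call the remote API (e.g. loco.ubuntu.com) here ...
    return ["result for %s" % query]

print(do_search("ubuntu-us-ca", {REMOTE_SEARCH_KEY: 'none'}))  # → []
print(do_search("ubuntu-us-ca", {REMOTE_SEARCH_KEY: 'all'}))
```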
And that’s it! That’s all you need to do in order to make your lens or scope follow the user’s privacy settings.
Now, before we get to the comments, I’d like to kindly point out that this post is about how to check the privacy setting in your lens or scope. It is not about whether or not we should be doing remote searches in the dash, or how you would rather the feature be implemented. If you want to pile on to those argument some more, there are dozens of open threads all over the internet where you can do that. Please don’t do it here.
I wasn’t aware, but there is a PreferencesManager class in Unity 6 (Ubuntu 12.10) that lets you access the same settings:
You should use this API instead of going directly to GSettings/DConf.
The Ubuntu Skunkworks program is now officially underway, we have some projects already staffed and running, and others where I am gathering final details about the work that needs to be done. Today I decided to look at who has applied, and what they listed as their areas of interest, programming language experience and toolkit knowledge. I can’t share details about the program, but I can give a general look at the people who have applied so far.
Most applicants listed more than one area of interest, including ones not listed in Mark’s original post about Skunkworks. I’m not surprised that UI/UX and Web were two of the most popular areas. I was expecting more people interested in the artistic/design side of things though.
Not as many people listed multiple languages as multiple areas of interest. As a programmer myself, I’d encourage other programmers to expand their knowledge to other languages. Python and web languages being the most popular isn’t at all surprising. I would like to see more C/C++ applicants, given the number of important projects that are written in them. Strangely absent was any mention of Vala or Go, surely we have some community members who have some experience with those.
The technology section had the most unexpected results. Gtk has the largest single slice, sure, but it’s still much, much smaller than I would have expected. Qt/QML even more so, where are all you KDE folks? The Django slice makes sense, given the number of Python and Web applicants.
So in total, we’ve had a pretty wide variety of skills and interests from Skunkworks applicants, but we can still use more, especially in areas that are under-represented compared to the wider Ubuntu community. If you are interested, the application process is simple: just create a wiki page using the ParticipationTemplate, and email me the link (email@example.com).
Now that Google+ has added a Communities feature, and seeing as how Jorge Castro has already created one for the wider Ubuntu community, I went ahead and created one specifically for our application developers. If you are an existing app developer, or someone who is interested in getting started with app development, or thinking about porting an existing app to Ubuntu, be sure to join.
Google+ communities are brand new, so we’ll be figuring out how best to use them in the coming days and weeks, but it seems like a great addition.
I just released a new PyGObject, for GNOME 3.7.2 which is due on Wednesday.
In this version PyGObject went through some major refactoring: some 5,000 lines of static bindings were removed and replaced with proper introspection and some overrides for backwards compatibility, and the static/GI/overrides code structure was simplified. For the developer this means that you can now use the full GLib API, a lot of which was previously hidden by old and incomplete static bindings; also, you can and should now use the officially documented GLib API instead of PyGObject’s static one, which has been marked as deprecated. For PyGObject itself this change means that the code structure is now a lot simpler to understand, all the bugs in the static GLib bindings are gone, and the GLib bindings will not go out of sync any more.
Lots of new tests were written to ensure that the API is backwards compatible, but experience teaches that there is always the odd corner case which we did not cover. So if your code does not work any more with 3.7.2, please do report bugs.
Another important change is that if you build PyGObject from source, it now defaults to using Python 3 if installed. As before, you can build for Python 2 with the PYTHON=python2.7 environment variable or the new --with-python=python2.7 configure option.
This release also brings several marshalling fixes, docstring improvements, support for code coverage, and other bug fixes.
Thanks to all contributors!
Summary of changes (see changelog for complete details):
A few weeks ago, Canonical founder Mark Shuttleworth announced a new project initiative dubbed “skunk works”, that would bring talented and trusted members of the Ubuntu community into what were previously Canonical-only development teams working on some of the most interesting and exciting new features in Ubuntu.
Since Mark’s announcement, I’ve been collecting the names and skill sets from people who were interested, as well as working with project managers within Canonical to identify which projects should be made part of the Skunk Works program. If you want to be added to the list, please create a wiki page using the SkunkWorks/ParticipationTemplate template and send me the link in an email (firstname.lastname@example.org). If you’re not sure, continue reading to learn more about what this program is all about.
Traditionally, skunk works programs have involved innovative or experimental projects led by a small group of highly talented engineers. The name originates from the Lockheed Martin division that produced such marvels as the U-2, SR-71 and F-117. For us, it is going to be focused on launching new projects or high-profile features for existing projects. We will follow the same pattern of building small, informal, highly skilled teams that can work seamlessly together to produce the kinds of amazing results that provide those “tada” moments.
Canonical is, despite what some may say, an open source company. Skunk Works projects will be no exception to this: the final results of the work will be released under an appropriate open license. So why keep it secret? One of the principal features of a traditional skunk works team is autonomy; they don’t need to seek approval for or justify their decisions until they’ve had a chance to prove them. Otherwise they wouldn’t be able to produce radically new designs or ideas; everything would either be watered down for consensus, or bogged down by argument. By keeping initial development private, our skunk works teams will be able to experiment and innovate freely, without having their work questioned and criticized before it is ready.
Our Skunk Works is open to anybody who wants to apply, but not everybody who applies will get in on a project. Because skunk works teams need to be very efficient and independent, all members need to be operating on the same page and at the same level in order to accomplish their goals. Mark mentioned that we are looking for “trusted” members of our community. There are two aspects to this trust. First, we need to trust that you will respect the private nature of the work, which as I mentioned above is crucial to fostering the kind of independent thinking that skunk works are famous for. Secondly, we need to trust in your ability to produce the desired results, and to work cooperatively with a small team towards a common goal.
We are still gathering candidate projects for the initial round of Skunk Works, but we already have a very wide variety. Most of the work is going to involve some pretty intense development work, both on the front-end UI and back-end data analysis. But there are also projects that will require a significant amount of new design and artistic work. It’s safe to say that the vast majority of the work will involve creation of some kind, since skunk works projects are by their nature more on the “proof” stage of development rather than “polish”. Once you have had a chance to prove your work, it will leave the confines of Skunk Works and be made available for public consumption, contribution, and yes, criticism.
Still interested? Great! In order to match people up with projects they can contribute to, we’re asking everybody to fill out a short Wiki page detailing their skill sets and relevant experience. You can use the SkunkWorks/ParticipationTemplate, just fill it in with your information. Then send the link to the new page to me (email@example.com) and I will add you to my growing list of candidates. Then, when we have an internal project join the Skunk Works, I will give the project lead a list of people whose skills and experience match the project’s need, and we will pick which ones to invite to join the team. Not everybody will get assigned to a project, but you won’t be taken off the list. If you revise your wiki page because you’ve acquired new skills or experience, just send me another email letting me know.
I just released PyGObject 3.4.1, in time for the GNOME 3.6.1 release on Wednesday.
This version provides a nice set of bug fixes; no API changes.
Thanks to all contributors!
Complete list of changes:
Well, we did it. The six members of the Canonical Community Team stayed awake and (mostly) online for 24 straight hours, all for your entertainment and generous donations. A lot of people gave a lot over the last week, both in terms of money and time, and every one of you deserves a big round of applause.
First off, I wanted to thank (blame) our fearless leader, Jono Bacon, for bringing up this crazy idea in the first place. He is the one who thought we should do something to give back to other organizations, outside of our FLOSS eco-system. It’s good to remind us all that, as important as our work is, there are still things so much more important. So thanks, Jono, for giving us a chance to focus some of our energy on the things that really matter.
I also need to thank the rest of my team, David Planella, Jorge Castro, Nick Skaggs and Daniel Holbach, for keeping me entertained and awake during that long, long 24 hours. There aren’t many people I could put up with for that long, I’m glad I work in a team full of people like you. And most importantly, thanks to all of our families for putting up with this stunt without killing us on-air.
Before we started this 24-hour marathon, I sent a challenge to the Debian community. I said that if I got 5 donations from their community, I would wear my Debian t-shirt during the entire broadcast. Well, I should have asked for more, because it didn’t take long before I had more than that, so I was happily sporting the Debian logo for 24 hours (that poor shirt won’t ever be the same).
I wasn’t the only one who put a challenge to the Debian community. Nick made a similar offer, in exchange for donations he would write missing man pages, and Daniel did the same by sending patches upstream. As a result, the Debian community made an awesome showing in support of our charities.
The biggest thanks, of course, go out to all of those who donated to our charities. Because of your generosity we raised well over £5000, with the contributions continuing to come in even after we had all finally gone to bed. As of right now, our total stands at £5295.70 ($8486). In particular, I would like to thank those who helped me raise £739.13 ($1184) for the Autism Research Trust:
And a very big thank you to my brother, Brian Hall, whose donation put us over £5000 when we only had about an hour left in the marathon. And, in a particularly touching gesture of brotherly-love, his donation came with this personal challenge to me:
So here it is. The things I do for charity.
Back in San Francisco, during UDS-Q, we had a discussion about the need for better online documentation for the various APIs that application developers use to write apps for Ubuntu. The Ubuntu App Showdown and subsequent AppDevUploadProcess spec work has consumed most of my time since then, but I was able to start putting together a spec for such a site. The App Showdown feedback we got from our developers survey highlighted the need, as lack of good Gtk documentation for Python was one of the most common problems people experienced, giving it a little more urgency.
Fortunately, Alberto Ruiz was at UDS, and told me about a project he had started for Gnome called Gnome Developer Network (GDN for short). Alberto had already done quite a bit of work on the database models and GObject Introspection parsing needed to populate it. The plan is to use GDN as the database and import process, and build a user-friendly web interface on top of that, linking in external resources like tutorials and AskUbuntu questions, as well as user submitted comments and code snippets.
The GDN code and some very basic template are already available. You can get the code from bzr with
bzr branch lp:ubuntu-api-website, then follow the instructions in the DEVELOPMENT file. I’ll also be running a live App Developer Q&A Session at 1700 UTC today (September 19th), and would be happy to help anybody get the code up and running during that time.
More than a few, actually. As part of our ongoing focus on App Developers, and helping them get their apps into the Ubuntu Software Center, we need to keep the Application Review Board (ARB) staffed and vibrant. Now that the App Showdown contest is over, we need people to step up and fill the positions of those members whose terms are ending. We also want to grow the community of app reviewers that work with the ARB to process all of the submissions that are coming in to the MyApps portal.
Two of the existing members, Bhavani Shankar and Andrew Mitchell, will be continuing to serve on the board, and Alessio Treglia will be joining them. But we still need four more members in order to fill the full 7 seats on the board. ARB applicants must be Ubuntu Members, Ubuntu Developers, and should create a wiki page for their application.
ARB members help application developers get their apps into Software Center by reviewing their package, providing support and feedback where needed, and finally voting to approve the app’s publication. You should be able to dedicate a few hours each week to reviewing apps in the queue, and discussing them on IRC and the ARB’s mailing list.
If you would like to apply, you can contact the current ARB members on #ubuntu-arb on Freenode IRC, or the team mailing list (app-review-board at lists.ubuntu.com). The current term will expire at the end of the month, so be sure to get your applications in as soon as you can.
In addition to the 7 members of the ARB itself, we are building a community of volunteers to help review submitted packages, and work with the author to make the necessary changes. There are no limits or restrictions on members of this community, though a rough knowledge of packaging will surely help. This group doesn’t vote on applications, but they are essential to helping get those applications ready for a vote.
The ARB helpers community was launched in response to the overwhelming number of submissions that came in during the App Showdown competition. Daniel Holbach put together a guide for new contributors to help them get started reviewing apps, and you can still follow those same steps if you would like to help out.
Again, if you would like to get involved with this community, you should join #ubuntu-arb on Freenode IRC, or contact the mailing list (app-review-board at lists.ubuntu.com).
For the past several week, David Planella, Jono Bacon and I have been drafting a spec that proposes a radically different approach to getting desktop applications into the Ubuntu Software Center. Now, there’s nothing that annoys me more than somebody proposing radical changes for no reason, and without giving much thought as to how it would actually be done. So I wanted to write down, here, both the justification for this proposal, and the process that we went through in drafting it.
The current process splits submissions between closed-source and commercial apps, which get reviewed by a paid team of Canonical employees, and non-commercial open source apps which are reviewed by the Application Review Board (ARB). The ARB consists of 7 volunteers from the Ubuntu community, who will review the source code and packaging of each submission. Members of the ARB are very smart, very dedicated members of the community, but they also have paying jobs, or are pursuing higher education (or both), so their time is a limited resource. The ARB process was meant to provide an easier route for app developers than the more rigorous process that distro packages must follow to get into the Universe repository or Debian’s archives, and in that respect it has been a success. But even with eased requirements, there was a limit to how many apps they could manually review.
The recent App Developer Showdown competition, which resulted in more than 140 new apps being submitted through our MyApps portal, showed us the limits of our current process. We even drafted a number of new volunteers to help review the incoming apps, and Daniel Holbach provided both instructions and programs to help speed things up. It took us weeks to give an initial review to all of the apps. Almost two months later and we still haven’t been able to publish more than a quarter of them. Android has seen over 9,000 new apps in a month, and I can only assume that iOS has seen similar numbers. If we can’t even scale to handle 140, something has to change.
The spec didn’t get written down all at once from some grand design. It grew organically, from a short list of general goals to the massive text it is today. In fact, the spec we ended up with is quite a bit different than the one we initially set out to write. We took our list of goals and started asking the obvious questions: what work is involved, who will it impact, and what could (will) go wrong? We could have just thrown these questions out to other people, but those people are busy and have their own things they are trying to do. Before we could ask anybody else to spend time on this, we had to put in some effort ourselves.
So we answered as many of these as we could between the three of us, and those answers changed our spec accordingly. That raised more questions, and we repeated the process, updating the spec and finding more questions that needed to be answered. In the process we gained both a clearer idea of what we wanted, and a better understanding of how to get there. By the time we had answered as many as we could on our own, our list of goals had transformed into a longer list of implementation items and who would most likely be doing them.
At that point, we had a more specific direction and a pretty good idea of how much work it would take. Having done as much of the leg-work as we could, we took the implementation items, and any unanswered questions we still had, and started talking to the people who would have to implement it. Unsurprisingly, these conversations had an even bigger impact on the spec, and it underwent some pretty drastic changes as we tried to nail down the details of the implementation. Just like the previous stage, we iterated over this one multiple times until we had as many details as we could collect, and answered all of the questions that we could. At the end, we had the massive spec we announced today.
But this is just the next stage, the spec isn’t final. The three of us have answered as much as we could, the teams who will implement it have answered as much as they could, now we’re introducing it the community to gather even more details and answer even more questions. The feedback we get in this stage will go back into the spec, and very likely generate new questions and feedback, and we’ll iterate through this stage too.
The final spec, whatever it ends up being, isn’t going to be perfect, and it’s not going to make everybody happy. But we can be confident that it will be a very well thought out spec, it will be a very detailed spec, and it will allow us to accomplish the goals we set out to accomplish at the beginning of it all. It will help make Ubuntu a much more attractive platform for application developers, it will make Software Center more useful to developers and users alike, and it will make Ubuntu a better OS for all of our users.
If you have any questions or comments on the spec itself, please send them to the ubuntu-devel mailing list, not the comments section here.
© 2010 Canonical Ltd. Ubuntu and Canonical are registered trademarks of Canonical Ltd.