Canonical Voices

Posts tagged with 'development'

Jussi Pakkanen

Let’s talk about revision control for a while. It’s great. Everyone uses it. People love the power and flexibility it provides.

However, if you read about happenings from ten or more years ago, you find that the situation was quite different. Seasoned developers were against revision control. They would flat out refuse to use it and instead just put everything on a shared network drive, or used something crazier, such as the revision control shingle.

Thankfully we as a society have moved forwards. Not using revision control is a firing offense. Most people would flat out refuse a job that does not use revision control, for anything short of a few million euros in cash up front. Everyone accepts that revision control is the building block of quality. This is good.

It is unfortunate that this view is severely lacking in other aspects of software development. Take tests as an example. There are actually people, in visible places, who publicly and vocally speak against writing tests. And for some reason we as a whole sort of accept that, rather than immediately flagging it as ridiculous nonsense.

A first example was told to me by a friend working on a quite complex piece of mathematical code. When he discovered that there were no tests at all verifying that it worked, the reply he got was this: “If you are smart enough to be hired to work on this code, you are smart enough not to need tests.” I really wish this were an isolated incident, but in my heart I know that is not the case.

The second example is a posting made a while back by a well known open source developer. It had a blanket statement saying that test driven development is bad and harmful. The main point seemed to be a false dichotomy between good software with no tests and poor software with tests.

Even if testing is done, the implementation may be just a massive bucketful of fail. As an example, here you can read how people thought audio codecs should be tested.

As long as this kind of thinking is tolerated, no matter how esteemed the person voicing it, we are in the same place medicine was during the age of bloodletting and leeches. This is why software is considered an unreliable, buggy piece of garbage that costs hundreds of millions. And the only way out is a change of collective attitude. Unfortunately those often take quite a long time to happen, but a man can dream, can he not?

Read more
pitti

Time for the first PyGObject release for GNOME 3.9.x! This release brings the performance optimizations (thanks to Daniel Drake), quite a lot of internal code cleanup, and various bug fixes.

Thanks to all contributors!

  • gtk-demo: Wrap description strings at 80 characters (Simon Feltman) (#698547)
  • gtk-demo: Use textwrap to reformat description for Gtk.TextView (Simon Feltman) (#698547)
  • gtk-demo: Use GtkSource.View for showing source code (Simon Feltman) (#698547)
  • Use correct class for GtkEditable’s get_selection_bounds() function (Mike Ruprecht) (#699096)
  • Test results of g_base_info_get_name for NULL (Simon Feltman) (#698829)
  • Remove g_type_init conditional call (Jose Rostagno) (#698763)
  • Update deps versions also in README (Jose Rostagno) (#698763)
  • Drop compat code for old python version (Jose Rostagno) (#698763)
  • Remove duplicate call to _gi.Repository.require() (Niklas Koep) (#698797)
  • Add ObjectInfo.get_class_struct() (Johan Dahlin) (#685218)
  • Change interpretation of NULL pointer field from None to 0 (Simon Feltman) (#698366)
  • Do not build tests until needed (Sobhan Mohammadpour) (#698444)
  • pygi-convert: Support toolbar styles (Kai Willadsen) (#698477)
  • pygi-convert: Support new-style constructors for Gio.File (Kai Willadsen) (#698477)
  • pygi-convert: Add some support for recent manager constructs (Kai Willadsen) (#698477)
  • pygi-convert: Check for double quote in require statement (Kai Willadsen) (#698477)
  • pygi-convert: Don’t transform arbitrary keysym imports (Kai Willadsen) (#698477)
  • Remove Python keyword escapement in Repository.find_by_name (Simon Feltman) (#697363)
  • Optimize signal lookup in gi repository (Daniel Drake) (#696143)
  • Optimize connection of Python-implemented signals (Daniel Drake) (#696143)
  • Consolidate signal connection code (Daniel Drake) (#696143)
  • Fix setting of struct property values (Daniel Drake)
  • Optimize property get/set when using GObject.props (Daniel Drake) (#696143)
  • configure.ac: Fix PYTHON_SO with Python3.3 (Christoph Reiter) (#696646)
  • Simplify registration of custom types (Daniel Drake) (#696143)
  • pygi-convert.sh: Add GStreamer rules (Christoph Reiter) (#697951)
  • pygi-convert: Add rule for TreeModelFlags (Jussi Kukkonen)
  • Unify interface struct to Python GI marshaling code (Simon Feltman) (#693405)
  • Unify Python interface struct to GI marshaling code (Simon Feltman) (#693405)
  • Unify Python float and double to GI marshaling code (Simon Feltman) (#693405)
  • Unify filename to Python GI marshaling code (Simon Feltman) (#693405)
  • Unify utf8 to Python GI marshaling code (Simon Feltman) (#693405)
  • Unify unichar to Python GI marshaling code (Simon Feltman) (#693405)
  • Unify Python unicode to filename GI marshaling code (Simon Feltman) (#693405)
  • Unify Python unicode to utf8 GI marshaling code (Simon Feltman) (#693405)
  • Unify Python unicode to unichar GI marshaling code (Simon Feltman) (#693405)
  • Fix enum and flags marshaling type assumptions (Simon Feltman)
  • Make AM_CHECK_PYTHON_LIBS not depend on AM_CHECK_PYTHON_HEADERS (Christoph Reiter) (#696648)
  • Use distutils.sysconfig to retrieve the python include path. (Christoph Reiter) (#696648)
  • Use g_strdup() consistently (Martin Pitt) (#696650)
  • Support PEP 3149 (ABI version tagged .so files) (Christoph Reiter) (#696646)
  • Fix stack corruption due to incorrect format for argument parser (Simon Feltman) (#696892)
  • Deprecate GLib and GObject threads_init (Simon Feltman) (#686914)
  • Drop support for Python 2.6 (Martin Pitt)
  • Remove static PollFD bindings (Martin Pitt) (#686795)
  • Drop test skipping due to too old g-i (Martin Pitt)
  • Bump glib and g-i dependencies (Martin Pitt)

Read more
Jussi Pakkanen

One of the grand Unix traditions is that source code is built directly inside the source tree. This is the simple approach, which has been used for decades. In fact, most people do not even consider doing anything else, because this is the way things have always been done.

The alternative to an in-source build is, naturally, an out-of-source build. In this build type you create a fresh subdirectory and all files generated during the build (object files, binaries etc) are written in that directory. This very simple change brings about many advantages.

Multiple build directories with different setups

This is the main advantage of separate build directories. When developing you typically want to build and test the software under separate conditions. For most work you want to have a build that has debug symbols on and all optimizations disabled. For performance tests you want to have a build with both debug and optimizations on. You might want to compile the code with both GCC and Clang to test compatibility and get more warnings. You might want to run the code through any one of the many static analyzers available.

If you have an in-source build, you need to nuke all build artifacts from the source tree, reconfigure the tree and then rebuild. Afterwards you need to restore the old settings, because you probably don’t want to run a static analyzer on your day-to-day development work, mostly because it is up to ten times slower than a non-optimized build.

Separate build directories provide a nice solution to this problem. Since all the state is stored in the build directory, you can have as many build directories per source directory as you want. They will not stomp on each other. You only need to configure each build directory once. When you want to build a specific configuration, you just run Make/Ninja/whatever in its directory. Assuming your build system is good (i.e. not Autotools with AM_MAINTAINER_MODE hacks), this will always work.
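
To make this concrete, here is a minimal sketch using CMake and Make (any build system with out-of-source support works the same way; the directory names are arbitrary):

  $ mkdir debug-build clang-build
  $ (cd debug-build && cmake -DCMAKE_BUILD_TYPE=debug ..)
  $ (cd clang-build && CC=clang CXX=clang++ cmake ..)
  $ make -C debug-build    # rebuilds only the debug configuration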

No need to babysit generated files

If you look at the .bzrignore file of a common Autotools project, it typically has on the order of a dozen or so rules for files such as Makefiles, Makefile.ins, libtool files and all that stuff. If your build system generates .c source files which it then compiles, all those files need to be in the ignore file too. You could also have a blanket rule of ‘*.c’, but that is dangerous if your source tree consists of handwritten C source. As files come and go, the ignore file needs to be updated constantly.

With build directories all this drudgery goes away. You only need to add build directory names to the ignore file and then you are set. All new source files will show up immediately as will stray files. There is no possibility of accidentally masking a file that should be checked in revision control. Things just work.

Easy clean

Want to get rid of a certain build configuration? Just delete the subdirectory it resides in. Done! There is no chance whatsoever that any state from said build setup remains in the source tree.

Separate partitions for source and build

This gets into very specific territory but may be useful sometimes. The build directory can be anywhere in the filesystem tree. It can even be on a different partition. This allows you to put the build directory on a faster drive or possibly even on ramdisk. Security conscious people might want to put the source tree on a read-only (optionally a non-execute) file system.

If the build tree is huge, deleting it can take a lot of time. If the build tree is in a BTRFS subvolume, deleting all of it becomes a constant time operation. This may be useful in continuous integration servers and the like.
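
As a hedged sketch (deleting a btrfs subvolume typically needs root, or a filesystem mounted with user_subvol_rm_allowed):

  $ btrfs subvolume create builddir
  $ (cd builddir && cmake .. && make)
  $ sudo btrfs subvolume delete builddir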

Conclusion

Building in separate build directories brings many advantages over building in-source. It might require some adjusting, though. One specific thing you can’t do any more is cd into a random directory in your source tree and type make to build only that subdirectory. This is mostly an issue with certain tools with poor build system integration that insist on running Make in-source. They should be fixed to work properly with out-of-source builds.

If you decide to give out-of-tree builds a try, there is one thing to note. You can’t have in-source and out-of-source builds in the same source tree at the same time (though you can have either of the two). They will interact with each other in counter-intuitive ways. The end result will be heisenbugs and tears, so just don’t do it. Most build systems will warn you if you try to have both at the same time. Building out-of-source may also break some applications, typically tests, that assume they are being run from within the source directory.

Read more
Jussi Pakkanen

There has been gradual movement towards CMake in Canonical projects. I have inspected quite a lot of build setups written by many different people and certain antipatterns and inefficiencies seem to pop up again and again. Here is a list of the most common ones.

Clobbering CMAKE_CXX_FLAGS

Very often you see constructs such as these:

set(CMAKE_CXX_FLAGS "-Wall -pedantic -Wextra")

This seems to be correct, since this command is usually at the top of the top level CMakeLists.txt. The problem is that CMAKE_CXX_FLAGS may have content that comes from outside CMakeLists. As an example, the user might set values with the ccmake configuration tool. The construct above destroys those settings silently. The correct form of the command is this:

set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall -pedantic -Wextra")

This preserves the old value.

Adding -g manually

People want to ensure that debugging information is on so they set this compiler flag manually. Often several times in different places around the source tree.

It should not be necessary to do this ever.

CMake has a concept of build identity. There are debug builds, which have debug info and no optimization. There are release builds which don’t have debug info but have optimization. There are relwithdebinfo builds which have both. There is also the default plain type which does not add any optimization or debug flags at all.

The correct way to enable debug is to specify that your build type is either debug or relwithdebinfo. CMake will then take care of adding the proper compiler flags for you. Specifying the build type is simple, just pass -DCMAKE_BUILD_TYPE=debug to CMake when you first invoke it. In day to day development work, that is what you want over 95% of the time so it might be worth it to create a shell alias just for that.
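
For instance, a minimal sketch of such an alias (the alias name is made up):

  $ alias cmakedbg='cmake -DCMAKE_BUILD_TYPE=debug'
  $ mkdir build && cd build && cmakedbg ..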

Using libraries without checking

This antipattern shows itself in constructs such as this:

target_link_libraries(myexe -lsomesystemlib)

CMake will pass the latter argument to the compiler command line directly, so it will link against the library. Most of the time. If the library does not exist, the end result is a cryptic linker error. The problem is worse still if the compiler in question does not understand the -l syntax for libraries (unfortunately such compilers exist).

The solution is to use find_library and pass the result from that to target_link_libraries. This is a bit more work up front but will make the system more pleasant to use.
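
A minimal sketch of that pattern (the variable and library names are placeholders):

find_library(SOMESYSTEMLIB somesystemlib)
if(NOT SOMESYSTEMLIB)
  message(FATAL_ERROR "somesystemlib not found")
endif()
target_link_libraries(myexe ${SOMESYSTEMLIB})

A missing library now stops the build at configure time with a clear message instead of a cryptic linker error.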

Adding header files to target lists

Suppose you have a declaration like this:

add_executable(myexe myexe.c myexe.h)

In this case myexe.h is entirely superfluous. It can just be dropped. The reason people put it there is probably that they think it is required to make CMake rebuild the target in case the header is changed. That is not necessary. CMake will use the dependency information from GCC and add this dependency automatically.

The only exception to this rule is when you generate header files as part of your build. Then you should put them in the target file list so CMake knows to generate them before compiling the target.
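
A sketch of that exception (the generator script here is hypothetical):

add_custom_command(OUTPUT generated.h
    COMMAND ${CMAKE_CURRENT_SOURCE_DIR}/gen-header.sh generated.h
    DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/gen-header.sh)
add_executable(myexe myexe.c generated.h) # listing generated.h makes CMake generate it first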

Using add_dependencies

This one is simple. Say you have code such as this:

target_link_libraries(myexe mylibrary)
add_dependencies(myexe mylibrary)

The second line is unnecessary. CMake knows there is a dependency between the two just based on the first line. There is no need to say it again, so the second line can be deleted.

The add_dependencies command is only required in certain rare and exceptional circumstances.

Invoking make

Sometimes people use custom build steps and, as part of those, invoke “make sometarget”. This is not very clean on many different levels. First of all, CMake has several different backends, such as Ninja, Eclipse, Xcode and others, which do not use Make to build. Thus the invocation will fail on those systems. Hardcoding make invocations in your build system prevents other people from using their preferred backends. This is unfortunate, as multiple backends are one of the main strengths of CMake.

Second, you can invoke targets directly in CMake. Most custom commands have a DEPENDS option that can be used to invoke other targets. That is the preferred way of doing this, as it works with all backends.
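
A rough sketch of the DEPENDS approach (the report tool and its -o flag are made up):

add_custom_command(OUTPUT report.txt
    COMMAND myexe -o report.txt # an executable target name is replaced by its build path
    DEPENDS myexe)              # target-level dependency: myexe is built first on any backend
add_custom_target(report DEPENDS report.txt)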

Assuming in-source builds

Unix developers have decades’ worth of muscle memory telling them to build their code in-source. This leaks into various places. As an example, test data files may be accessed assuming that the source is built in-tree and that the program is executed in the directory it resides in.

Out-of-source builds provide many benefits (which I’m not going into right now, it could fill its own article). Even if you personally don’t want to use them, many other people will. Making it possible is the polite thing to do.

Inline sed and shell pipelines

Some builds require file manipulation with sed or other such shell tricks. There’s nothing wrong with them as such. The problem comes from embedding them inside CMakeLists command invocations. They should instead be put into their own script files which are then called from CMake. This makes them more easily documentable and testable.
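
A hedged sketch of calling out to such a script (the script and file names are made up):

add_custom_command(OUTPUT munged.c
    COMMAND ${CMAKE_CURRENT_SOURCE_DIR}/munge-source.sh ${CMAKE_CURRENT_SOURCE_DIR}/original.c munged.c
    DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/munge-source.sh ${CMAKE_CURRENT_SOURCE_DIR}/original.c)

The shell trickery lives in munge-source.sh, where it can be run, tested and documented on its own.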

Invoking CMake multiple times

This last piece is not a coding antipattern but a usage antipattern. People seem to re-run the CMake binary every time they make changes to their build definitions. This is not necessary. For any given build directory, you only ever need to run the cmake binary once: when you first configure your project.

After the first configuration the only command you ever need to run is your build command (be it make or ninja). It will detect changes in the build system, automatically regenerate all necessary files and compile the end result. The user does not need to care. This behaviour is probably residue from being burned several times by Autotools’ maintainer mode. This is understandable, but in CMake this feature will just work.

Read more
Daniel Holbach

The Ubuntu Developer Advisory Team has been in place for two or three release cycles now, and it’s been a fun journey so far. We’ve got in touch with many, many new contributors, and old contributors as well. If you don’t know what this team does, here’s what our wiki page has to say:

We

  • Reach out to new contributors, thank them for their work and get feedback.
  • Reach out to people who might be ready to apply for upload rights and help them.
  • Reach out to contributors that went inactive and get feedback from them and offer help.

I personally found this very rewarding as I got to talk to many new contributors and see how they feel about Ubuntu Development.

You can help!

If the above sounds interesting to you, you enjoy engaging socially, and you have gathered some experience in Ubuntu Development and want to help out, please talk to me or comment below. It’d be great to have you on board!

Read more
pitti

I just pushed out a new python-dbusmock release 0.6.

Calling a method on the mock now emits a MethodCalled signal on the org.freedesktop.DBus.Mock interface. In some cases this is easier to track than parsing the mock’s log or using GetMethodCalls. Thanks to Lars Uebernickel for this.

DBusMockObject.AddTemplate() and DBusTestCase.spawn_server_template() can now load local templates from your own project by specifying a path to a *.py file as template name. Thanks to Lucas De Marchi for this feature.

I also wrote a quite comprehensive template for systemd’s logind. It stubs out the power management functionality as well as user/seat/session objects, and is convincing enough for loginctl. Some bits, like AttachDevice, are still missing, as they seem unlikely to be required for D-BUS mock tests, but please let me know if you need anything else.
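
As a rough sketch of how this looks in a test case (API as described in the python-dbusmock documentation; details may vary):

  import subprocess
  import dbusmock

  class TestLogind(dbusmock.DBusTestCase):
      @classmethod
      def setUpClass(cls):
          cls.start_system_bus()  # logind lives on the system D-BUS
          cls.dbus_con = cls.get_dbus(system_bus=True)

      def setUp(self):
          # 'logind' is the shipped template; a path such as
          # 'tests/mytemplate.py' would load a local template instead
          (self.p_mock, self.obj_logind) = self.spawn_server_template(
              'logind', {}, stdout=subprocess.PIPE)

      def tearDown(self):
          self.p_mock.terminate()
          self.p_mock.wait()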

The mock processes now terminate automatically if their connected D-BUS goes down, as advertised in the documentation.

You can get the new tarball from Launchpad, and I uploaded it to Debian experimental now.

Enjoy!

Read more
pitti

I just released a new PyGObject for GNOME 3.7.92. This fixes a couple of crashes and marshalling errors again, but most importantly gained a change to automatically mute the PyGIDeprecationWarnings for stable versions. Please run pythonX.X with the -Wd option to still be able to see them.
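
For example, assuming a Python 3 script (the file name is a placeholder):

  $ python3 -Wd yourscript.py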

We got through all the bugs that were milestoned for GNOME 3.8 and don’t want or plan to introduce any major behavioural change at this point, so barring catastrophes this is what will be in GNOME 3.8.0.

Thanks to all contributors!

  • Fix stack smasher when marshaling enums as a vfunc return value (Simon Feltman) (#637832)
  • Change base class of PyGIDeprecationWarning based on minor version (Simon Feltman) (#696011)
  • autogen.sh: Source gnome-autogen to fix out of source builddir (Alban Browaeys) (#694889)
  • pygtkcompat: Make gdk.Window.get_geometry return tuple of 5 (Simon Feltman)
  • pygtkcompat: Initialize hint to zero in set_geometry_hints (Simon Feltman)
  • Remove incorrect bounds check with property helper flags (Simon Feltman)
  • Fix crash when setting property of type object to an incorrect type (Simon Feltman) (#695420)
  • Remove skipping of object property tests (Simon Feltman) (#695420)
  • Give more informative error when setting property to incorrect type (Simon Feltman) (#695420)

Read more
pitti

I just found out that PyGObject 3.7.91 as released yesterday breaks GEdit plugins. I just pushed out 3.7.91.1 to unbreak this again, sorry about that!

Read more
pitti

Many libraries build a GObject introspection repository (*.gir) these days which allows the library to be used from many scripting (Python, JavaScript, Perl, etc.) and other (e. g. Vala) languages without the need for manually writing bindings for each of those.

One issue that I hear surprisingly often is “there is zero documentation for those bindings”. Tools for building documentation out of a .gir have existed for a long time already, just far too many people seem to not know about them.

For example, to build Yelp XML documentation out of the libnotify bindings for Python:

  $ g-ir-doc-tool --language=Python -o /tmp/notify-doc /usr/share/gir-1.0/Notify-0.7.gir

Then you can call yelp /tmp/notify-doc to browse the documentation. You can of course also use the standard Mallard tools to convert them to HTML for sticking them on a website:

  $ cd /tmp/notify-doc
  $ yelp-build html .

Admittedly they are far from pretty, and there are still lots of refinements to be done to the documentation itself (like adding language-specific examples) and to the generated result (prettification, dynamic search, and what not), but it’s certainly far from “nothing”, and a good start.

If you are interested in working on this, please show up in #introspection or discuss it on bugzilla, desktop-devel-list@, or the library specific lists/bug trackers.

Read more
pitti

I just released a new PyGObject for GNOME 3.7.91. This brings some marshalling fixes, plugs tons of memory leaks, and now raises a Python DeprecationWarning when your code calls a method which is marked as deprecated in the typelib. Please note that Python hides them by default, so if you are interested in those you need to run python with the -Wd option.

Thanks to all contributors!

  • Fix many memory leaks (#675726, #693402, #691501, #510511, #672224, and several more which are detected by our test suite) (Martin Pitt)
  • Do not clobber original Gdk/Gtk functions with overrides (Martin Pitt) (#686835)
  • Optimize GValue.get/set_value by setting GValue.g_type to a local (Simon Feltman) (#694857)
  • Run tests with G_SLICE=debug_blocks (Martin Pitt) (#691501)
  • Add override helper for stripping boolean returns (Martin Pitt) (#694431)
  • Drop obsolete pygobject_register_sinkfunc() declaration (Martin Pitt) (#639849)
  • Fix marshalling of C arrays with explicit length in signal arguments (Martin Pitt) (#662241)
  • Fix signedness, overflow checking, and 32 bit overflow of GFlags (Martin Pitt) (#693121)
  • gi/pygi-marshal-from-py.c: Fix build on Visual C++ (Chun-wei Fan) (#692856)
  • Raise DeprecationWarning on deprecated callables (Martin Pitt) (#665084)
  • pygtkcompat: Add Widget.window, scroll_to_mark, and window methods (Simon Feltman) (#694067)
  • pygtkcompat: Add Gtk.Window.set_geometry_hints which accepts keyword arguments (Simon Feltman) (#694067)
  • Ship pygobject.doap for autogen.sh (Martin Pitt) (#694591)
  • Fix crashes in various GObject signal handler functions (Simon Feltman) (#633927)
  • pygi-closure: Protect the GSList prepend with the GIL (Olivier Crête) (#684060)
  • generictreemodel: Fix bad default return type for get_column_type (Simon Feltman)

Read more
beuno

There seems to be quite a bit of buzz around Yahoo! effectively laying off remote workers (making them choose between starting to go to an office or resigning), and I've read different perspectives on the subject, for and against remote working.
Having worked at Canonical for over 4 years, and on open source projects for quite a bit longer than that, my knee-jerk reaction is that the folks crying out that remote working just isn't as productive as working in an office are being pretty short-sighted.
Canonical has hundreds of employees working remotely, far more than working in an office, and it seems like we're generally a very productive company. We take on huge competitors who have ten times the number of people working on any given project, and we put up a pretty good fight. So I can tell you remote working is full of awesome, both for the company (productivity, getting to choose from a huge pool of talent) and for the employee (no commute, fewer distractions).
I also think that the fact that open source projects are taking over the world at an incredible pace is a pretty huge testament to just how great remote working can be. And that is an extreme case, where people aren't even available on a regular schedule, and without the much tighter and clearer shared goals a company has.

All that said, there are several ways things can go wrong with remote working.

Thoughtlessly mixing remote and co-located teams. All-remote and all-co-located tend to work out more easily. Mixing the two without a clear plan for how communication is going to work is most likely going to end badly. The co-located team will tend to talk to each other in the hallways and not bring the people who are remote into the loop, mostly because of the extra cost of communication. If making decisions in person is accepted, and there are no guidelines in place to document and open up the discussion to the full audience, then it's going to fail. Remote or not, documenting these things is good practice: it provides traceability, and there's less room for people to walk away with different interpretations.

Hiring remote workers that are not generally self-directed. I can't stress this point enough. Remote working isn't for everybody, you have to make sure the people who are working remotely are generally happy making decisions on their own on a daily basis, can push through problems without a lot of hand-holding and are good at flagging problems when they see one. These types of people are great to have on site as well, but in a remote situation this is a non-negotiable skill.

Unclear goals as a team or company. If what people are supposed to be doing isn't crystal clear to everybody involved, remote working is going to be very messy. Strongly self-directed people are going to push forward with what they think is the right thing to do (based on incomplete information), and people who are less independent are going to be reading a lot of RSS feeds.

I also think there are some common sense arguments against remote working that are actually an argument in favor of it.

Slackers will slack harder when at home. So, if you're at home, who's going to know whether you spent your morning watching TV or thinking about a really hard problem? When you're at the office, it's much easier to check up on what you're doing with your time. I think that if you have an employee whose use of time you need to check up on, you have a problem. The answer is not going to be to put him in an office and have him learn how to alt-tab very quickly to an IDE when you walk by. You should be working with them to make sure their performance is adequate. If it's not, and you can't seem to find a way around it, fire him. Keeping him around and force-feeding him work is a huge waste of time and money. Slackers are going to slack harder at home; use that to your advantage to get rid of people who aren't up to the task, or who don't care anymore, more quickly.

Communication is more expensive. It is. It also forces people to learn how to communicate better, more concisely, and in a way that's generally documented. While you can easily have calls, in the end you need to email a list or use some other form of communication that reaches everybody. So there's a short-term cost for a long-term benefit. Sometimes you need that short-term benefit right now, in which case you bring people together for a week or two, spend some of that money you've saved on infrastructure, and push things forward.

So, in general, I think having remote workers forces a company to have clearer, well-communicated goals and better documentation of decisions; hiring driven and self-directed people makes you think long and hard about your processes; and it opens you up to hiring from a much larger pool of people (all over the world!). I think those are great things to have pressuring you consistently, and they will make you a better company.
Like everything else, if you have remote workers and pretend they are the same as co-located ones, it's going to fail.

Read more
pitti

Hot on the heels of yesterday’s big 0.2 release, I pushed out umockdev 0.2.1 with a couple of bug fixes:

  • umockdev-wrapper: Use exec to avoid keeping the shell process around and make killing the subprogram from outside work properly.
  • Fix building with automake 1.12, thanks Peter Hutterer.
  • Support opening several netlink sockets (i. e. udev monitors) at the same time.
  • Fix building with older kernels which don’t have the EVIOCGMTSLOTS ioctl yet.

This fixes the “bind: address already in use” errors that were popping up in X.org and upower when running under umockdev, and finally gets us working packages for Ubuntu 12.04 LTS (in the daily-builds PPA).

Read more
pitti

I just released umockdev 0.2.

The big new feature of this release is support for evdev ioctls. I. e. you can now record what e. g. X.org is doing to touchpads, touch screens, etc.:

  $ umockdev-record /dev/input/event15 > /tmp/touchpad.umockdev
  # umockdev-record -i /tmp/touchpad.ioctl /dev/input/event15 -- Xorg -logfile /dev/null

and load that back into a testbed with X.org using the dummy driver:

  cat <<EOF > xorg-dummy.conf
  Section "Device"
        Identifier "test"
        Driver "dummy"
  EndSection
  EOF

  $ umockdev-run -l /tmp/touchpad.umockdev -i /dev/input/event15=/tmp/touchpad.ioctl -- \
       Xorg -config xorg-dummy.conf -logfile /tmp/X.log :5

Then e. g. DISPLAY=:5 xinput will recognize the simulated device. Note that Xvfb won’t work as that does not use udev for device discovery, but only adds the XTest virtual devices and nothing else, so you need to use the real X.org with the dummy driver to run this as a normal user.

This enables easier debugging of new kinds of input devices, as well as writing tests for handling multiple touchscreens/monitors, integration tests of Wacom devices, and so on.

This release now also works with older automakes and Vala 0.16, so that you can use this from Ubuntu 12.04 LTS. The daily PPA now also has packages for that.

Attention: This version does not work any more with recorded ioctl files from version 0.1.

More detailed list of changes:

  • umockdev-run: Fix running of child program to keep stdin.
  • preload: Fix resolution of “/dev” and “/sys”
  • ioctl_tree: Fix endless loop when the first encountered ioctl was unknown
  • preload: Support opening a /dev node multiple times for ioctl emulation (issue #3)
  • Fix parallel build (issue #2)
  • Support xz compressed ioctl files in umockdev_testbed_load_ioctl().
  • Add example umockdev and ioctl files for a gphoto camera and an MTP capable mobile phone.
  • Fix building with automake 1.11.3 and Vala 0.16.
  • Generalize ioctl recording and emulation for ioctls with simple structs, i. e. no pointer fields. This makes it much easier to add more ioctls in the future.
  • Store return values of ioctls in records, as they are not always 0 (like EVIOCGBIT)
  • Add support for ioctl ranges (like EVIOCGABS) and ioctls with variable length (like EVIOCGBIT).
  • Add all reading evdev ioctls, for recording and mocking input devices like touch pads, touch screens, or keyboards. (issue #1)

Read more
pitti

I just released a new PyGObject for GNOME 3.7.90, with a nice set of bug fixes and some internal code cleanup. Thanks to all contributors!

  • overrides: Fix inconsistencies with drag and drop target list API (Simon Feltman) (#680640)
  • pygtkcompat: Add pygtk compatible GenericTreeModel implementation (Simon Feltman) (#682933)
  • overrides: Add support for iterables besides tuples for TreePath creation (Simon Feltman) (#682933)
  • Prefix __module__ attribute of function objects with gi.repository (Niklas Koep) (#693839)
  • configure.ac: only enable code coverage when available (Jonathan Ballet) (#693328)
  • Correctly set properties on object with statically defined properties (Jonathan Ballet) (#693618)
  • autogen.sh: Use gnome-autogen.sh (Martin Pitt) (#693328)
  • Fix reference leaks with transient floating objects (Simon Feltman) (#687522)

Read more
Jussi Pakkanen

A few times now, Thomas Bushnell of Google has given a presentation at UDS about Google’s private Ubuntu fork. One of the interesting tidbits he mentions is that deploying a system update that requires rebooting costs the company one million dollars in lost productivity.

This gives us a nice metric for evaluating how much other operations at the Googleplex might cost. If we assume that a reboot takes five minutes per person, it follows that taking one minute of every Google employee’s time costs two hundred thousand dollars. Let’s use this piece of information to estimate how much money configure scripts cost in lost productivity.

The duration of one configure script run varies wildly. It is rare to go under 30 seconds, and several minutes is not uncommon. Let’s use a round lowball estimate of one minute on average.

That million dollars accounts for every single Google employee. What percentage of those have to build source code with Autotools is unknown, so let’s say half. It is likewise hard to estimate how often these people run configure scripts, but let’s be on the safe side and say once per day. Similarly let’s assume 200 working days in one year.

Crunching all the numbers gives us the final result. That particular organisation is paying $20 million every year just to have their engineers sit still watching text scroll by in a terminal.
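
(For the record, the arithmetic: $200,000 per staff-minute × 0.5 of the staff × 1 configure-minute per day × 200 working days = $20 million per year.)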

Read more
pitti

What is this?

umockdev is a set of tools and a library to mock hardware devices for programs that handle Linux hardware devices. It also provides tools to record the properties and behaviour of particular devices, and to run a program or test suite under a test bed with the previously recorded devices loaded.

This allows developers of software like gphoto or libmtp to receive these records in bug reports and recreate the problem on their systems without having access to the affected hardware, and to write regression tests that do not need any particular privileges and can thus run in a standard make check.

After working on it for several weeks and lots of rumbling on G+, it’s now useful and documented enough for the first release 0.1!

Component overview

umockdev consists of the following parts:

  • The umockdev-record program generates text dumps (conventionally called *.umockdev) of some specified, or all of the system’s devices and their sysfs attributes and udev properties. It can also record ioctls that a particular program sends and receives to/from a device, and store them into a text file (conventionally called *.ioctl).
  • The libumockdev library provides the UMockdevTestbed GObject class which builds sysfs and /dev testbeds, provides API to generate devices, attributes, properties, and uevents on the fly, and can load *.umockdev and *.ioctl records into them. It provides VAPI and GI bindings, so you can use it from C, Vala, and any programming language that supports introspection. This is the API that you should use for writing regression tests. You can find the API documentation in docs/reference in the source directory.
  • The libumockdev-preload library intercepts access to /sys, /dev/, the kernel’s netlink socket (for uevents) and ioctl() and re-routes them into the sandbox built by libumockdev. You don’t interface with this library directly, instead you need to run your test suite or other program that uses libumockdev through the umockdev-wrapper program.
  • The umockdev-run program builds a sandbox using libumockdev, can load *.umockdev and *.ioctl files into it, and run a program in that sandbox. I. e. it is a CLI interface to libumockdev, which is useful in the “debug a failure with a particular device” use case if you get the text dumps from a bug report. This automatically takes care of using the preload library, i. e. you don’t need umockdev-wrapper with this. You cannot use this program if you need to simulate uevents or change attributes/properties on the fly; for those you need to use libumockdev directly.

Example: Record and replay PtP/MTP USB devices

So how do you use umockdev? For the “debug a problem” use case you usually don’t want to write a program that uses libumockdev, but just use the command line tools. Let’s capture some runs from libmtp tools, and replay them in a mock environment:

  • Connect your digital camera, mobile phone, or other device which supports PtP or MTP, and locate it in lsusb. For example
      Bus 001 Device 012: ID 0fce:0166 Sony Ericsson Xperia Mini Pro
  • Dump the sysfs device and udev properties:
      $ umockdev-record /dev/bus/usb/001/012 > mobile.umockdev
  • Now record the dynamic behaviour (i. e. usbfs ioctls) of various operations. You can store multiple different operations in the same file, which will share the common communication between them. For example:
      $ umockdev-record --ioctl mobile.ioctl /dev/bus/usb/001/012 mtp-detect
      $ umockdev-record --ioctl mobile.ioctl /dev/bus/usb/001/012 mtp-emptyfolders
  • Now you can disconnect your device, and run the same operations in a mocked testbed. Please note that /dev/bus/usb/001/012 merely echoes what is in mobile.umockdev and it is independent of what is actually in the real /dev directory. You can rename that device in the generated *.umockdev files and on the command line.
      $ umockdev-run --load mobile.umockdev --ioctl /dev/bus/usb/001/012=mobile.ioctl mtp-detect
      $ umockdev-run --load mobile.umockdev --ioctl /dev/bus/usb/001/012=mobile.ioctl mtp-emptyfolders

Example: using the library to fake a battery

If you want to write regression tests, it’s usually more flexible to use the library instead of calling everything through umockdev-run. As a simple example, let’s pretend we want to write tests for upower.

Batteries, and power supplies in general, are simple devices in the sense that userspace programs such as upower only communicate with them through sysfs and uevents; no /dev nor ioctls are necessary. docs/examples/ has two example programs showing how to use libumockdev to create a fake battery device, change it to low charge, send an uevent, and run upower on a local test system D-BUS in the testbed, watching what happens with upower --monitor-detail. battery.c shows how to do that with plain GObject in C; battery.py is the equivalent program in Python, using the GI binding. You can just run the latter like this:

  umockdev-wrapper python3 docs/examples/battery.py

and you will see that upowerd (which runs on a temporary local system D-BUS in the test bed) will report a single battery with 75% charge, which gets down to 2.5% a second later.

The gist of it is that you create a test bed with

  UMockdevTestbed *testbed = umockdev_testbed_new ();

and add a device with certain sysfs attributes and udev properties with

    gchar *sys_bat = umockdev_testbed_add_device (
            testbed, "power_supply", "fakeBAT0", NULL,
            /* attributes */
            "type", "Battery",
            "present", "1",
            "status", "Discharging",
            "energy_full", "60000000",
            "energy_full_design", "80000000",
            "energy_now", "48000000",
            "voltage_now", "12000000",
            NULL,
            /* properties */
            "POWER_SUPPLY_ONLINE", "1",
            NULL);

You can then e. g. change an attribute and synthesize a “change” uevent with

  umockdev_testbed_set_attribute (testbed, sys_bat, "energy_now", "1500000");
  umockdev_testbed_uevent (testbed, sys_bat, "change");

With Python or other introspected languages, or in Vala it works the same way, except that it looks a bit leaner due to “proper” object semantics.

Packages

I have a packaging branch for Ubuntu and a recipe to do daily builds with the latest upstream code into my daily builds PPA (for 12.10 and raring). I will soon upload it to Raring proper, too.

What’s next?

The current set of features should already get you quite far for a range of devices. I’d love to get feedback from you if you use this for anything useful, in particular how to improve the API, the command line tools, or the text dump format. I’m not really happy with the split between umockdev (sys/dev) and ioctl files and the relatively complicated CLI syntax of umockdev-record, so any suggestion is welcome.

One use case that I have for myself is to extend the coverage of ioctls for input devices such as touch screens and wacom tablets, so that we can write some tests for gnome-settings-daemon plugins.

I also want to find a way to pass ioctls back to the test suite/calling program instead of having to handle them all in the preload library, which would make it a lot more flexible. However, due to the nature of the ioctl ABI this is not easy.

Where to go to

The code is hosted on GitHub in the umockdev project; this started out as a systemd branch to add this functionality to libudev, but after a discussion with Kay we decided to keep it separate. But I kept it in git anyway, given how popular it is today. For the bzr lovers, Launchpad has an import at lp:umockdev.

Release tarballs will be on Launchpad as well. Please file bugs and enhancement requests in the GitHub tracker.

Finally, if you have questions or want to discuss something, you can always find me on IRC (pitti on Freenode or GNOME).

Thanks for your attention and happy testing!

Read more
pitti

I just released a new PyGObject for GNOME 3.7.5. Unfortunately master.gnome.org is out of space right now, so I put the new tarball on my Ubuntu people account for the time being.

This again brings a nice set of memory leak and bug fixes, some more reduction of static bindings, and better support for building under Windows.

Thanks to all contributors!

  • Move various signal methods from static bindings to gi and python (Simon Feltman) (#692918)
  • GLib overrides: Support unpacking ‘maybe’ variants (Paolo Borelli) (#693032)
  • Fix ref count leak when creating pygobject wrappers for input args (Mike Gorse) (#675726)
  • Prefix names of typeless enums and flags for GType registration (Simon Feltman) (#692515)
  • Fix compilation with non-C99 compilers such as Visual C++ (Chun-wei Fan) (#692856)
  • gi/overrides/Glib.py: Fix running on Windows/non-Unix (Chun-wei Fan)
  • Do not immediately initialize Gdk and Gtk on import (Martin Pitt) (#692300)
  • Accept ±inf and NaN as float and double values (Martin Pitt) (#692381)
  • Fix repr() of GLib.Variant (Martin Pitt)
  • Fix gtk-demo for Python 3 (Martin Pitt)
  • Define GObject.TYPE_VALUE gtype constant (Martin Pitt)
  • gobject: Go through introspection on property setting (Olivier Crête) (#684062)
  • Clean up caller-allocated GValues and their memory (Mike Gorse) (#691820)
  • Use GNOME_COMPILE_WARNINGS from gnome-common (Martin Pitt)

Read more
Daniel Holbach

I mentioned the upcoming Ubuntu Developer Week in my last blog post already, but I thought I’d mention a couple more great sessions we are going to have during the event.

If you haven’t read the original post yet, here are the quick details: running from 29th to 31st January 2013 we are going to have sessions, mostly on IRC, some on Hangouts-on-Air, where you get an introduction to all kinds of topics surrounding Ubuntu Development. After attending the sessions you will have a good idea of how things roughly fit together, how to get started, who to talk to and what’s going on. It’s the perfect opportunity.

Here are a few quotes from session leaders:

Benjamin Drung

Benjamin Drung and Michael Bienia (of whom the internet does not seem to have any pictures whatsoever) are going to lead the Developers Roundtable and have this to say:

Do you have questions about Ubuntu development? Here you have the best opportunity to ask everything you want to know, because we will have a number of developers there who can answer your questions for you.

David Planella

David Planella, who will talk about “Writing apps for Ubuntu”, says:

Learn how to use the best open source tools and technologies to write your apps on Ubuntu, both on the desktop and on the phone. You’ll be able to get your first app running in a matter of minutes!

Michael Hall

Michael Hall never gets enough, so he’s giving two sessions at UDW this time around. Here’s what he has to say about Ubuntu App Developer tools: “Ubuntu provides a variety of tools to help you write and manage your applications. This session will cover everything from bootstrapping a new project, to making the final packages installable through the Software Center and everything in between.”

He will also talk about Unity integration: “The Unity desktop provides many opportunities for your application to integrate with the full user experience. Learn how to add your application to the Unity messaging or sound indicators, add your own indicator, extend the Unity Launcher and much more.”

Oliver Grawert

We’re excited to have Oliver Grawert here, who will talk about creating Ubuntu images, and the Nexus 7 images in particular. He will talk about “[t]he Ubuntu image build infrastructure at a glance: what tools do we use, how do they interact and how is the hardware set up for building the official Ubuntu images” and “[h]ow are the Nexus 7 images different from “normal” Ubuntu images, what can be hacked to make small modifications, and how can they be re-packed or supplied with a different root file system”.

Alex Chiang

Alex Chiang will introduce us to the world of memory leaks and says:

As we polish and prep Ubuntu for mobile devices, a key activity will be hunting down and squashing memory leaks. This session will discuss the basic theory of leaks, introduce valgrind and our brand new apport-valgrind wrapper, and how to analyze a valgrind log file. A C/C++ background will be helpful to get the most out of this session, but is not strictly required.

Nicholas Skaggs

QA mastermind Nicholas “balloons” Skaggs will talk us through “Automated Testing with autopilot” and says:

Learn about how autopilot is utilized by the Unity team and the Quality team to test the Ubuntu desktop. We’ll also provide an overview of what autopilot can do, show and run some example testcases, and give you the knowledge needed to get started writing your own autopilot testcases.

We are super happy to have brought this line-up of speakers to Ubuntu Developer Week this time around. Head to https://wiki.ubuntu.com/UbuntuDeveloperWeek to review the full schedule, see how to join in and find out more.

Share the news with your friends and bring your questions! :-)

Read more
Daniel Holbach

The times for Ubuntu have never been more exciting. Cloud, server, desktop, laptop, TV, tablet, phone – everything runs Ubuntu or is soon going to. This makes developing Ubuntu very special, because fixes which go into Ubuntu in one place will benefit all form factors and all circumstances where it’s used. By improving Ubuntu you make millions of people around the globe happy.

During every 6 month release cycle we run Ubuntu Developer Week. It’s back and we’re going to have it from 29th January to 31st January. During the event we will have online sessions where seasoned Ubuntu developers introduce you to their respective area of expertise or to Ubuntu Development in general.

We will have many great sessions, from hands-on introductions to packaging and Ubuntu development to talks about how to quickly get involved in certain teams and interact with other projects. We will talk about tools and infrastructure, fixing bugs, finding memleaks, working with apps, creating Ubuntu images and much, much more. This is the best opportunity to get a feel for how Ubuntu development works, get to know people and ask all the questions you might have.

I talked to a few session hosts; read below what they had to say.

Martin Pitt

Martin Pitt, who will talk about Automated Testing, says: “We have been, and are changing the Ubuntu development process to employ automated testing and avoid introducing regressions, and to improve confidence, focus, and development speed. In the first talk I will give an overview about the various kinds of tests that we do, so that you know where to watch out for failures and get debugging information. The second talk focuses on how to write tests, i.e. which technologies are available for e.g. hardware and GUI related behaviour or system-wide integration checks.”

Stefano Rivera

Stefano Rivera, who will talk about Upstreams and Debian in particular, said: “So, working effectively in Ubuntu means also working with the teams and people upstream who wrote the software we distribute. I’ll talk about why this is important, when it’s necessary, and how to go about it. In particular, our most important upstream is Debian. Debian has a rather unusual (though powerful) bug-tracker. We’ll cover finding, submitting, and modifying bugs on it.”

Chris Wilson

Chris Wilson, project leader of the Hundred Papercuts Team, says: “Unity may be the shiny new thing that everyone loves, but style without substance is only so much fluff, and the substance of Ubuntu is still its GTK-based apps. One Hundred Paper Cuts focuses its attention on that substance, rubbing out the little annoyances that get under our skin every day we’re using Ubuntu. This session will introduce you to the project, how it works, and how to get involved. If you want to contribute to Ubuntu in a way that has the biggest impact on the quality of experience for the end user, then don’t miss this.”

Bhavani Shankar

Bhavani Shankar said about his talk on patch systems: “Many a time we wonder how to integrate a fix into a particular part of the code in a program and upload it into the repositories without having to change code each time by hand and making it clumsy. In this session I’m going to show how to use the different patch management systems that are in practice now.”

About his talk on the app review process in Ubuntu he says: “In this session I’m going to explain the present workflow of reviewing apps and give an introduction to the new app dev upload process to automate reviews.”

The forum we use for this is IRC, as it makes it easy for many people to interact without losing track; you can easily copy/paste, and we can save the logs as searchable docs afterwards. You join in by simply connecting to #ubuntu-classroom on irc.freenode.net.

Check out the schedule and find more info on the Ubuntu wiki. We hope to see you all there; please let your friends know too. :-)

Read more
Jussi Pakkanen

We have all run into software inconveniences: things that you can basically do, but that for some reason or another are unintuitive, hard or needlessly complex. When you air your concerns about these issues, sometimes they get fixed. At other times you get back a reply starting with “Well, in general you may have a point, but …”.

The rest of the sentence is something along the lines of these examples:

“… you only have to do it once so it’s no big deal.”
“… there are cases where [e.g. autodetection] would not work so having the user [manually do task X] is the only way to be reliable.”
“… I don’t see any problem, in fact I like it the way it is.”
“… replacing [the horrible thing in question] with something better is too much work.”
“… fixing that would change things and change is bad.”
“… that is the established standard way of doing things.”
“… having a human being write that in is good, it means that the input is inspected.”

These are all fine and acceptable arguments under certain circumstances. In fact, they are great! Let’s see what life would be like in a parallel universe where people had followed them slavishly.

Booting Linux: an adventure for the brave

You arrive at your work computer and turn it on. The LILO boot prompt comes up as usual. You type in the partition you want to boot from. This you must do every time, because you might have changed partition settings and thus made LILO go out of sync. You type in your boot stanza, sure in the knowledge that you get a 100% rock solid boot every time.

Except when you have a typo in your boot command, but a computer can’t work around that. And that happens only rarely anyway, and why would you boot your computer more than once per month?

Once the kernel has loaded, you type in the kernel modules you need to use the machine. You also type in all the extra parameters those modules require, because some chipsets may work incorrectly sometimes (or so you have been told). So you type in some dozen strings of hexadecimal numbers and really enjoy it, in a Stockholmesque way.

Finally all the data is put in and the system will boot itself. Then it is time to type in your network settings. In this universe there is no Protocol to Configure network Host settings Dynamically. And why would there be? Any bug in such a system would render the entire network unusable. No, the only way to ensure that things work is to configure network settings by hand every time. Errors in settings cause only one machine to break, not the entire network. Unless you mix up gateway/netmask/IP addresses, but surely no-one is that stupid? And if they are, it’s their own damn fault! Having things fail spectacularly is GOOD, because it shames people into doing the right thing.

After this and a couple of other simple things (each of which you only need to do once, remember) you finally have a working machine. You log on.

Into a text console, naturally. Not all people need X so it should not be started by default. Resources must be used judiciously after all.

But you only need to start X once per session so no biggie. Just like you only need to write in your monitor modeline once per X startup because autodetection might fail and cause HW failure. The modeline can not be stored in a file and used automatically because you might have plugged in a different monitor. Typing it in every time is the only way to be sure. Or would you rather die horribly in a fire caused by incorrect monitor parameters?

After all that is done you can finally fire up an XTerm to start working. But today you feel like increasing the font size a bit. This is about as simple as can get. XTerm stores a list of font sizes it will display in XResources. All you have to do is to edit them, shut down X and start it up again.

Easy as pie. And the best part: you only have to do this once.

Well, once every time you want to add new font sizes. But how often is that, really?

Summary

The examples listed above are but a small fraction of reality. If computer users had to do all these “one time only” things, they would easily take up the entire eight-hour work day. The reason they don’t is that some developer Out There has followed this simple rule:

Almost every task that needs to be done “one time only” should, in fact, be done exactly zero times.

ZERO!

Read more