Canonical Voices

Posts tagged with 'development'

pitti

I was asked to pour some love over autopilot-gtk, a GTK module that provides introspection of widget states to Autopilot. For those who don’t know, Autopilot is a QA tool for writing automated tests of GUI applications without the race conditions and limitations that previous tools had from working only at the ATK level. Please see the documentation and tutorial for more information. A lot of community members already do great things with it, such as automating testing for Ubiquity or writing tests for GNOME applications like evince, gedit, nautilus, or Shotwell. This should now hopefully become easier.

autopilot-gtk now has a proper test suite; I triaged all bug reports, wrote reproducers for them, and fixed them all in today’s upload to Saucy. In particular, you can now do the following:

  • Access to GtkBuilder names: Instead of having to find a particular widget in terms of class, position, label contents, or other (sometimes) non-unique or unstable properties, you can now pick it by its unique and stable GtkBuilder name, which is the ID that most upstream code uses to manipulate widgets: b = self.app.select_single(BuilderName='entry_searchquery')
  • GtkTextBuffer type GObject properties are now translated into plain strings, which allows you to access the textual contents of a GtkTextView widget with my_textview.buffer (both for simple property access as well as for selecting by buffer contents).
  • GEnum and GFlags properties are now accessible. Enums are translated to strings (self.app.select_many('GtkButton', relief='GTK_RELIEF_HALF') or self.assertEqual(btn_greet.resize_mode, 'GTK_RESIZE_PARENT')), and flags are represented as a simple integer (like my_widget.events); in theory we could represent them as strings like FLAG_FOO | FLAG_BAR, but this becomes too unwieldy; for reliable identity matching one would always need to take care to sort them alphabetically, keep consistent spacing, etc.
  • Please let me know if you need access to other types of properties; it is now quite easy to support more (as long as there is a reasonable way of mapping them to a standard D-BUS data type). So please report bugs. A combined example of these features follows below.
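
Put together, a minimal test sketch could look like this (the application binary and the widget IDs here are hypothetical):

  from autopilot.testcase import AutopilotTestCase

  class SearchTests(AutopilotTestCase):

      def setUp(self):
          super(SearchTests, self).setUp()
          # launch a GTK application with the autopilot-gtk module loaded
          self.app = self.launch_test_application('myapp', app_type='gtk')

      def test_search_widgets(self):
          # pick widgets by their stable GtkBuilder ID instead of class or position
          entry = self.app.select_single(BuilderName='entry_searchquery')
          self.assertEqual(entry.text, '')
          # GtkTextBuffer type properties are translated into plain strings
          view = self.app.select_single(BuilderName='textview_results')
          self.assertEqual(view.buffer, '')
          # GEnum properties are matched and compared as strings
          button = self.app.select_single('GtkButton', relief='GTK_RELIEF_NORMAL')
          self.assertEqual(button.relief, 'GTK_RELIEF_NORMAL')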

Read more
pitti

I released umockdev 0.2.6. Most importantly, this now fully works on ARM platforms, as we want to use it to write tests for/on the Ubuntu phone. I tested it on my Nexus 7, and the tests also succeed on the ARM Ubuntu builders (which are Panda boards). Fixing this revealed some interesting issues in recorded ioctl traces (they are platform specific in some cases due to different word lengths) as well as kernel bugs in the Tegra drivers.

This version also fixes compatibility with older automake versions again, so that the daily builds for raring should work again.

I also have a new gvfs test case ready to commit which uses umockdev (if available) to test the functionality of the gphoto backend. But that needs the new UMockdevTestbed.clear() API in 0.2.6, so I was holding it back. I will land it in upstream git soon.

Read more
Jussi Pakkanen

Go’s decision not to provide exceptions has given rise to a renaissance of sorts for eliminating exceptions and going back to error codes. Various reasons are given, such as efficiency, simplicity and the fact that exceptions “suck”.

Let’s examine what exceptions really are through a simple example. Say we need to write code to download some XML, parse and validate it and then extract some piece of information. There are several different ways in which this can fail: network may be down, the server won’t respond, the XML is malformed and so on. Suppose then that we encounter an error. The call stack probably looks like this:
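
  Func1 -> Func2 -> Func3 -> Func4 -> Func5 -> Func6 -> Func7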

Func1 is the function that drives this functionality and Func7 is where the problem happens. In this particular case we don’t care about partial results. If we can’t do all steps, we just give up. The error propagation starts by Func7 returning an error code to Func6. Func6 detects this and returns an error to Func5. This keeps happening until Func1 gets the error and reports failure to its caller.

Should Func7 throw an exception, functions 6-2 would not need to do anything. The compiler takes care of everything; Func1 catches the exception and reports the error.

This very simple example tells us what exceptions really are: a reliable way of moving up the call stack multiple frames at a time.

It also tells us what their main feature is: they provide a way to centralise error handling in one place.

It should be noted that exceptions do not force centralised error handling. Any function between 1 and 7 can catch any exception if that is deemed the best thing to do. The developer only needs to write code in those locations. In contrast, error codes require extra code at every single intermediate step. This might not seem like much in this particular case; after all, there are only six functions to change. Unfortunately, in reality things look like this:
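
  Func1
  ├── Func2
  │   ├── Func4
  │   └── Func5
  └── Func3
      ├── Func6
      └── Func7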

That is, functions usually call several other functions to get their job done. This means that if the average call stack depth is N, the developer needs to write O(2^N) error handling stubs. They also need to be tested, which means writing tons of mock classes. If any single one of these checks is wrong or missing, the system has a latent bug.

Even worse, most error code handlers look roughly like this:

ec = do_something();
if(ec) {
  do_some_cleanup();
  return ec;
}

What this code actually does is replicate the behaviour of exceptions. The only difference is that the developer needs to write it anew every single time, which opens the door for bugs.
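
For comparison, here is roughly what the centralised version looks like with exceptions (a minimal C++ sketch; the function names follow the example above):

#include <stdexcept>
#include <cstdio>

void func7() { throw std::runtime_error("network is down"); }
void func6() { func7(); } /* the intermediate functions have no error handling code at all */

int main() {
    try {
        func6();
    } catch (const std::exception &e) {
        /* the single, centralised error handler */
        std::printf("Operation failed: %s\n", e.what());
    }
    return 0;
}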

Design lesson to be learned

Usually when you design an API, there are two choices: it can either be very simple or feature rich. The latter usually takes more time for the API developer to get right but saves effort for its users; in the case of exceptions, it requires work in the compiler, linker and runtime. Depending on circumstances, either one may be a valid choice.

When choosing between these two it is often beneficial to step back and look at it from a wider perspective. If the simpler choice was taken, what would happen? If it seems that in most cases (say >80%) people would only use the simple approach to mimic the behaviour of the feature rich one, it is a pretty strong hint that you should provide the feature rich one (or maybe even both).

This problem can go the other way, too: the framework may provide only a very feature rich and complex API, which people then use to simulate the simpler approach. The price of good design is eternal vigilance.

Read more
Jussi Pakkanen

If you read discussions on the Internet about memory allocation (and who doesn’t, really), one surprising tidbit that always comes up is that in Linux, malloc never returns null because the kernel does a thing called memory overcommit. This is easy to verify with a simple test application.

#include <stdio.h>
#include <stdlib.h> /* malloc lives here; malloc.h is non-standard */

int main(int argc, char **argv) {
  while(1) {
    char *x = malloc(1);
    if(!x) {
      printf("Malloc returned null.\n");
      return 0;
    }
    *x = 0;
  }
  return 1;
}

This app tries to malloc memory one byte at a time and writes to it. It keeps doing this until either malloc returns null or the process is killed by the OOM killer. When run, the latter happens. Thus we have now proved conclusively that malloc never returns null.

Or have we?

Let’s change the code a bit.

#include <stdio.h>
#include <stdlib.h> /* malloc lives here; malloc.h is non-standard */

int main(int argc, char **argv) {
  long size=1;
  while(1) {
    char *x = malloc(size*1024);
    if(!x) {
      printf("Malloc returned null.\n");
      printf("Tried to alloc: %ldk.\n", size);
      return 0;
    }
    *x = 0;
    free(x);
    size++;
  }
  return 1;
}

In this application we try to allocate a block of ever increasing size. If the allocation is successful, we release the block before trying to allocate a bigger one. This program does receive a null pointer from malloc.

When run on a machine with 16 GB of memory, the program fails once the allocation grows to roughly 14 GB. I don’t know the exact reason for this, but it may be that the kernel reserves some part of the address space for itself, so trying to allocate a chunk bigger than all remaining memory fails.
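
As an aside, how eagerly Linux overcommits is tunable: the vm.overcommit_memory sysctl selects the policy (0 is the heuristic default, 1 always overcommits, 2 disables overcommit), so results of experiments like the above can vary between machines:

  $ sysctl vm.overcommit_memory
  vm.overcommit_memory = 0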

Summarizing: malloc under Linux can either return null or not, and the non-null pointer you get back is either valid or invalid, and there is no way to tell which is the case.

Happy coding.

Read more
Jussi Pakkanen

Let’s talk about revision control for a while. It’s great. Everyone uses it. People love the power and flexibility it provides.

However, if you read about happenings from ten or so years ago, you find that the situation was quite different. Seasoned developers were against revision control. They would flat out refuse to use it and instead just put everything on a shared network drive, or used something crazier, such as the revision control shingle.

Thankfully we as a society have moved forward. Not using revision control is a firing offense. Most people would flat out refuse to accept a job that does not use revision control, regardless of anything short of a few million euros in cash up front. Everyone accepts that revision control is a building block of quality. This is good.

It is unfortunate that this view is severely lacking in other aspects of software development. Let’s take tests as an example. There are actually people, in visible places, who publicly and vocally speak against writing tests. And for some reason we as a whole sort of accept that, rather than immediately flagging it as ridiculous nonsense.

The first example was told to me by a friend working on a quite complex piece of mathematical code. When he discovered that there were no tests at all verifying that it worked, the reply he got was this: “If you are smart enough to be hired to work on this code, you are smart enough not to need tests.” I really wish this were an isolated incident, but in my heart I know it is not.

The second example is a posting made a while back by a well known open source developer. It had a blanket statement saying that test driven development is bad and harmful. The main point seemed to be a false dichotomy between good software with no tests and poor software with tests.

Even if testing is done, the implementation may be just a massive bucketful of fail. As an example, here you can read how people thought audio codecs should be tested.

As long as this kind of thinking is tolerated, no matter how esteemed the person voicing it, we are in the same place medicine was during the age of bloodletting and leeches. This is why software is considered an unreliable, buggy piece of garbage that costs hundreds of millions. The only way out is a change of collective attitude. Unfortunately those often take quite a long time to happen, but a man can dream, can he not?

Read more
pitti

Time for the first PyGObject release for GNOME 3.9.x! This release brings performance optimizations (thanks to Daniel Drake), quite a lot of internal code cleanup, and various bug fixes.

Thanks to all contributors!

  • gtk-demo: Wrap description strings at 80 characters (Simon Feltman) (#698547)
  • gtk-demo: Use textwrap to reformat description for Gtk.TextView (Simon Feltman) (#698547)
  • gtk-demo: Use GtkSource.View for showing source code (Simon Feltman) (#698547)
  • Use correct class for GtkEditable’s get_selection_bounds() function (Mike Ruprecht) (#699096)
  • Test results of g_base_info_get_name for NULL (Simon Feltman) (#698829)
  • Remove g_type_init conditional call (Jose Rostagno) (#698763)
  • Update deps versions also in README (Jose Rostagno) (#698763)
  • Drop compat code for old python version (Jose Rostagno) (#698763)
  • Remove duplicate call to _gi.Repository.require() (Niklas Koep) (#698797)
  • Add ObjectInfo.get_class_struct() (Johan Dahlin) (#685218)
  • Change interpretation of NULL pointer field from None to 0 (Simon Feltman) (#698366)
  • Do not build tests until needed (Sobhan Mohammadpour) (#698444)
  • pygi-convert: Support toolbar styles (Kai Willadsen) (#698477)
  • pygi-convert: Support new-style constructors for Gio.File (Kai Willadsen) (#698477)
  • pygi-convert: Add some support for recent manager constructs (Kai Willadsen) (#698477)
  • pygi-convert: Check for double quote in require statement (Kai Willadsen) (#698477)
  • pygi-convert: Don’t transform arbitrary keysym imports (Kai Willadsen) (#698477)
  • Remove Python keyword escapement in Repository.find_by_name (Simon Feltman) (#697363)
  • Optimize signal lookup in gi repository (Daniel Drake) (#696143)
  • Optimize connection of Python-implemented signals (Daniel Drake) (#696143)
  • Consolidate signal connection code (Daniel Drake) (#696143)
  • Fix setting of struct property values (Daniel Drake)
  • Optimize property get/set when using GObject.props (Daniel Drake) (#696143)
  • configure.ac: Fix PYTHON_SO with Python3.3 (Christoph Reiter) (#696646)
  • Simplify registration of custom types (Daniel Drake) (#696143)
  • pygi-convert.sh: Add GStreamer rules (Christoph Reiter) (#697951)
  • pygi-convert: Add rule for TreeModelFlags (Jussi Kukkonen)
  • Unify interface struct to Python GI marshaling code (Simon Feltman) (#693405)
  • Unify Python interface struct to GI marshaling code (Simon Feltman) (#693405)
  • Unify Python float and double to GI marshaling code (Simon Feltman) (#693405)
  • Unify filename to Python GI marshaling code (Simon Feltman) (#693405)
  • Unify utf8 to Python GI marshaling code (Simon Feltman) (#693405)
  • Unify unichar to Python GI marshaling code (Simon Feltman) (#693405)
  • Unify Python unicode to filename GI marshaling code (Simon Feltman) (#693405)
  • Unify Python unicode to utf8 GI marshaling code (Simon Feltman) (#693405)
  • Unify Python unicode to unichar GI marshaling code (Simon Feltman) (#693405)
  • Fix enum and flags marshaling type assumptions (Simon Feltman)
  • Make AM_CHECK_PYTHON_LIBS not depend on AM_CHECK_PYTHON_HEADERS (Christoph Reiter) (#696648)
  • Use distutils.sysconfig to retrieve the python include path. (Christoph Reiter) (#696648)
  • Use g_strdup() consistently (Martin Pitt) (#696650)
  • Support PEP 3149 (ABI version tagged .so files) (Christoph Reiter) (#696646)
  • Fix stack corruption due to incorrect format for argument parser (Simon Feltman) (#696892)
  • Deprecate GLib and GObject threads_init (Simon Feltman) (#686914)
  • Drop support for Python 2.6 (Martin Pitt)
  • Remove static PollFD bindings (Martin Pitt) (#686795)
  • Drop test skipping due to too old g-i (Martin Pitt)
  • Bump glib and g-i dependencies (Martin Pitt)

Read more
Jussi Pakkanen

One of the grand Unix traditions is that source code is built directly inside the source tree. This is the simple approach, which has been used for decades. In fact, most people do not even consider doing something else, because this is the way things have always been done.

The alternative to an in-source build is, naturally, an out-of-source build. In this build type you create a fresh subdirectory and all files generated during the build (object files, binaries etc) are written in that directory. This very simple change brings about many advantages.

Multiple build directories with different setups

This is the main advantage of separate build directories. When developing you typically want to build and test the software under separate conditions. For most work you want to have a build that has debug symbols on and all optimizations disabled. For performance tests you want to have a build with both debug and optimizations on. You might want to compile the code with both GCC and Clang to test compatibility and get more warnings. You might want to run the code through any one of the many static analyzers available.

If you have an in-source build, you need to nuke all build artifacts from the source tree, reconfigure the tree and then rebuild. You also need to restore the old settings afterwards, because you probably don’t want to run a static analyzer on your day-to-day development work, mostly because it is up to 10 times slower than a non-optimized build.

Separate build directories provide a nice solution to this problem. Since all of a build’s state is stored in its own build directory, you can have as many build directories per source directory as you want. They will not stomp on each other. You only need to configure each build directory once. When you want to build a specific configuration, you just run Make/Ninja/whatever in its directory. Assuming your build system is good (i.e. not Autotools with AM_MAINTAINER_MODE hacks), this will always work.
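
As a concrete sketch (assuming a CMake-based project here; other build systems work analogously), setting up two parallel build directories looks like this:

  $ mkdir build-debug build-release
  $ (cd build-debug && cmake -DCMAKE_BUILD_TYPE=Debug ..)
  $ (cd build-release && cmake -DCMAKE_BUILD_TYPE=Release ..)
  $ make -C build-debug      # day-to-day development build
  $ make -C build-release    # optimized build for performance tests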

No need to babysit generated files

If you look at the .bzrignore file of a common Autotools project, it typically has on the order of a dozen rules for files such as Makefiles, Makefile.ins, libtool files and all that stuff. If your build system generates .c source files which it then compiles, all those files need to be in the ignore file too. You could also have a blanket rule of ‘*.c’, but that is dangerous if your source tree consists of handwritten C source. As files come and go, the ignore file needs to be updated constantly.

With build directories all this drudgery goes away. You only need to add build directory names to the ignore file and then you are set. All new source files will show up immediately as will stray files. There is no possibility of accidentally masking a file that should be checked in revision control. Things just work.

Easy clean

Want to get rid of a certain build configuration? Just delete the subdirectory it resides in. Done! There is no chance whatsoever that any state from said build setup remains in the source tree.

Separate partitions for source and build

This gets into very specific territory but may be useful sometimes. The build directory can be anywhere in the filesystem tree; it can even be on a different partition. This allows you to put the build directory on a faster drive or possibly even on a ramdisk. Security conscious people might want to put the source tree on a read-only (and optionally non-executable) file system.

If the build tree is huge, deleting it can take a lot of time. If the build tree is in a BTRFS subvolume, deleting all of it becomes a constant time operation. This may be useful in continuous integration servers and the like.

Conclusion

Building in separate build directories brings many advantages over building in-source. It might require some adjusting, though. One specific thing that you can’t do any more is cd into a random directory in your source tree and type make to build only that subdirectory. This is mostly an issue with certain tools with poor build system integration that insist on running Make in-source. They should be fixed to work properly with out-of-source builds.

If you decide to give out-of-tree builds a try, there is one thing to note. You can’t have in-source and out-of-source builds in the same source tree at the same time (though you can have either of the two). They will interact with each other in counter-intuitive ways. The end result will be heisenbugs and tears, so just don’t do it. Most build systems will warn you if you try to have both at the same time. Building out-of-source may also break some applications, typically tests, that assume they are being run from within the source directory.

Read more
Jussi Pakkanen

There has been gradual movement towards CMake in Canonical projects. I have inspected quite a lot of build setups written by many different people and certain antipatterns and inefficiencies seem to pop up again and again. Here is a list of the most common ones.

Clobbering CMAKE_CXX_FLAGS

Very often you see constructs such as these:

set(CMAKE_CXX_FLAGS "-Wall -pedantic -Wextra")

This seems to be correct, since this command is usually at the top of the top level CMakeLists.txt. The problem is that CMAKE_CXX_FLAGS may have content that comes from outside CMakeLists. As an example, the user might set values with the ccmake configuration tool. The construct above destroys those settings silently. The correct form of the command is this:

set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall -pedantic -Wextra")

This preserves the old value.

Adding -g manually

People want to ensure that debugging information is on so they set this compiler flag manually. Often several times in different places around the source tree.

It should not be necessary to do this ever.

CMake has a concept of build identity. There are debug builds, which have debug info and no optimization. There are release builds which don’t have debug info but have optimization. There are relwithdebinfo builds which have both. There is also the default plain type which does not add any optimization or debug flags at all.

The correct way to enable debug is to specify that your build type is either debug or relwithdebinfo. CMake will then take care of adding the proper compiler flags for you. Specifying the build type is simple, just pass -DCMAKE_BUILD_TYPE=debug to CMake when you first invoke it. In day to day development work, that is what you want over 95% of the time so it might be worth it to create a shell alias just for that.
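
For example (the alias name here is just a suggestion):

  $ cmake -DCMAKE_BUILD_TYPE=debug /path/to/source
  $ alias cmakedbg='cmake -DCMAKE_BUILD_TYPE=debug'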

Using libraries without checking

This antipattern shows itself in constructs such as this:

target_link_libraries(myexe -lsomesystemlib)

CMake passes -lsomesystemlib to the compiler command line as-is, so it will link against the library. Most of the time. If the library does not exist, the end result is a cryptic linker error. The problem is worse still if the compiler in question does not understand the -l syntax for libraries (unfortunately such compilers exist).

The solution is to use find_library and pass the result from that to target_link_libraries. This is a bit more work up front but will make the system more pleasant to use.
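
A fixed version might look like this (the variable name is arbitrary):

find_library(SOMESYSTEMLIB_PATH somesystemlib)
if(NOT SOMESYSTEMLIB_PATH)
  message(FATAL_ERROR "Could not find somesystemlib")
endif()
target_link_libraries(myexe ${SOMESYSTEMLIB_PATH})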

Adding header files to target lists

Suppose you have a declaration like this:

add_executable(myexe myexe.c myexe.h)

In this case myexe.h is entirely superfluous and can just be dropped. The reason people put it in is probably that they think it is required to make CMake rebuild the target when the header changes. That is not necessary: CMake uses the dependency information from GCC and adds this dependency automatically.

The only exception to this rule is when you generate header files as part of your build. Then you should put them in the target file list so CMake knows to generate them before compiling the target.

Using add_dependencies

This one is simple. Say you have code such as this:

target_link_libraries(myexe mylibrary)
add_dependencies(myexe mylibrary)

The second line is unnecessary. CMake knows there is a dependency between the two just based on the first line. There is no need to say it again, so the second line can be deleted.

add_dependencies is only required in certain rare and exceptional circumstances.

Invoking make

Sometimes people use custom build steps and, as part of those, invoke “make sometarget”. This is not very clean on many different levels. First of all, CMake has several different backends, such as Ninja, Eclipse and Xcode, which do not use Make to build, so the invocation will fail on those systems. Hardcoding make invocations in your build system prevents other people from using their preferred backends. This is unfortunate, as multiple backends are one of the main strengths of CMake.

Second of all, you can invoke targets directly in CMake. Most custom commands have a DEPENDS option that can be used to invoke other targets. That is the preferred way of doing this as it works with all backends.
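
A sketch of the portable form (the target and file names here are hypothetical): a custom command that needs the some_generator tool simply lists it in DEPENDS, and any backend will build it first:

add_custom_command(OUTPUT generated_data.txt
  COMMAND some_generator -o generated_data.txt
  DEPENDS some_generator)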

Assuming in-source builds

Unix developers have decades worth of muscle memory telling them to build their code in-source. This leaks into various places. As an example, test data files may be accessed assuming that the source is built in-tree and that the program is executed in the directory it resides in.

Out-of-source builds provide many benefits (which I’m not going into right now, it could fill its own article). Even if you personally don’t want to use them, many other people will. Making it possible is the polite thing to do.

Inline sed and shell pipelines

Some builds require file manipulation with sed or other such shell tricks. There’s nothing wrong with them as such. The problem comes from embedding them inside CMakeLists command invocations. They should instead be put into their own script files which are then called from CMake. This makes them more easily documentable and testable.
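
For instance, rather than embedding a sed pipeline in a CMakeLists command, the custom command can call a standalone script (the script and file names here are hypothetical):

add_custom_command(OUTPUT munged.h
  COMMAND ${CMAKE_CURRENT_SOURCE_DIR}/tools/munge-header.sh ${CMAKE_CURRENT_SOURCE_DIR}/original.h munged.h
  DEPENDS tools/munge-header.sh original.h)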

Invoking CMake multiple times

This last piece is not a coding antipattern but a usage antipattern. People seem to run the CMake binary again after making changes to their build definitions. This is not necessary. For any given build directory, you only ever need to run the cmake binary once: when you first configure your project.

After the first configuration the only command you ever need to run is your build command (be it make or ninja). It will detect changes in the build system, automatically regenerate all necessary files and compile the end result. The user does not need to care. This behaviour is probably residue from being burned several times by Autotools’ maintainer mode. This is understandable, but in CMake this feature will just work.

Read more
Daniel Holbach

The Ubuntu Developer Advisory Team has been in place for two or three release cycles already, and it’s been a fun journey so far. We’ve gotten in touch with many, many new contributors, and old contributors as well. If you don’t know what this team does, here’s what our wiki page has to say:

We

  • Reach out to new contributors, thank them for their work and get feedback.
  • Reach out to people who might be ready to apply for upload rights and help them.
  • Reach out to contributors that went inactive and get feedback from them and offer help.

I personally found this very rewarding as I got to talk to many new contributors and see how they feel about Ubuntu Development.

You can help!

If the above sounds interesting to you and you enjoy engaging socially, if you have gained some experience in Ubuntu Development and want to help out, please talk to me or comment below. It’d be great to have you on board!

Read more
pitti

I just pushed out a new python-dbusmock release 0.6.

Calling a method on the mock now emits a MethodCalled signal on the org.freedesktop.DBus.Mock interface. In some cases this is easier to track than parsing the mock’s log or using GetMethodCalls. Thanks to Lars Uebernickel for this.

DBusMockObject.AddTemplate() and DBusTestCase.spawn_server_template() can now load local templates from your own project by specifying a path to a *.py file as template name. Thanks to Lucas De Marchi for this feature.

I also wrote a quite comprehensive template for systemd’s logind. It stubs out the power management functionality as well as user/seat/session objects, and is convincing enough for loginctl. Some bits, like AttachDevice, are missing, as they seem unlikely to be required for D-BUS mock tests, but please let me know if you need anything else.
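
A rough sketch of how a test case could use that template through the standard python-dbusmock test case interface (the test body itself is hypothetical):

  import subprocess
  import dbusmock

  class TestLogind(dbusmock.DBusTestCase):
      @classmethod
      def setUpClass(cls):
          cls.start_system_bus()

      def setUp(self):
          # spawn a mock logind from the shipped template
          (self.p_mock, self.obj_logind) = self.spawn_server_template(
              'logind', {}, stdout=subprocess.PIPE)

      def tearDown(self):
          self.p_mock.terminate()
          self.p_mock.wait()

      def test_loginctl_sees_mock(self):
          # loginctl now talks to the mock instead of the real logind
          subprocess.check_call(['loginctl', '--no-pager'])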

The mock processes now terminate automatically if the D-BUS they are connected to goes down, as advertised in the documentation.

You can get the new tarball from Launchpad, and I uploaded it to Debian experimental now.

Enjoy!

Read more
pitti

I just released a new PyGObject for GNOME 3.7.92. This fixes a couple of crashes and marshalling errors again, but most importantly it now automatically mutes the PyGIDeprecationWarnings for stable versions. Please run pythonX.X with the -Wd option to still be able to see them.

We got through all our bugs that were milestoned for GNOME 3.8 and do not want or plan to introduce any major behavioural change at this point, so barring catastrophes this is what will be in GNOME 3.8.0.

Thanks to all contributors!

  • Fix stack smasher when marshaling enums as a vfunc return value (Simon Feltman) (#637832)
  • Change base class of PyGIDeprecationWarning based on minor version (Simon Feltman) (#696011)
  • autogen.sh: Source gnome-autogen to fix out of source builddir (Alban Browaeys) (#694889)
  • pygtkcompat: Make gdk.Window.get_geometry return tuple of 5 (Simon Feltman)
  • pygtkcompat: Initialize hint to zero in set_geometry_hints (Simon Feltman)
  • Remove incorrect bounds check with property helper flags (Simon Feltman)
  • Fix crash when setting property of type object to an incorrect type (Simon Feltman) (#695420)
  • Remove skipping of object property tests (Simon Feltman) (#695420)
  • Give more informative error when setting property to incorrect type (Simon Feltman) (#695420)

Read more
pitti

I just found out that PyGObject 3.7.91 as released yesterday breaks GEdit plugins. I just pushed out 3.7.91.1 to unbreak this again, sorry about that!

Read more
pitti

Many libraries build a GObject introspection repository (*.gir) these days which allows the library to be used from many scripting (Python, JavaScript, Perl, etc.) and other (e. g. Vala) languages without the need for manually writing bindings for each of those.

One issue that I hear surprisingly often is “there is zero documentation for those bindings”. Tools for building documentation out of a .gir have existed for a long time already; far too many people just seem not to know about them.

For example, to build Yelp XML documentation out of the libnotify bindings for Python:

  $ g-ir-doc-tool --language=Python -o /tmp/notify-doc /usr/share/gir-1.0/Notify-0.7.gir

Then you can call yelp /tmp/notify-doc to browse the documentation. You can of course also use the standard Mallard tools to convert them to HTML for sticking them on a website:

  $ cd /tmp/notify-doc
  $ yelp-build html .

Admittedly they are far from pretty, and there are still lots of refinements to be done, both for the documentation itself (like adding language specific examples) and for the generated result (prettification, dynamic search, and what not), but it’s certainly far from “nothing”, and a good start.

If you are interested in working on this, please show up in #introspection or discuss it on bugzilla, desktop-devel-list@, or the library specific lists/bug trackers.

Read more
pitti

I just released a new PyGObject for GNOME 3.7.91. This brings some marshalling fixes, plugs tons of memory leaks, and now raises a Python DeprecationWarning when your code calls a method which is marked as deprecated in the typelib. Please note that Python hides them by default, so if you are interested in those you need to run python with the -Wd option.

Thanks to all contributors!

  • Fix many memory leaks (#675726, #693402, #691501, #510511, #672224, and several more which are detected by our test suite) (Martin Pitt)
  • Do not clobber original Gdk/Gtk functions with overrides (Martin Pitt) (#686835)
  • Optimize GValue.get/set_value by setting GValue.g_type to a local (Simon Feltman) (#694857)
  • Run tests with G_SLICE=debug_blocks (Martin Pitt) (#691501)
  • Add override helper for stripping boolean returns (Martin Pitt) (#694431)
  • Drop obsolete pygobject_register_sinkfunc() declaration (Martin Pitt) (#639849)
  • Fix marshalling of C arrays with explicit length in signal arguments (Martin Pitt) (#662241)
  • Fix signedness, overflow checking, and 32 bit overflow of GFlags (Martin Pitt) (#693121)
  • gi/pygi-marshal-from-py.c: Fix build on Visual C++ (Chun-wei Fan) (#692856)
  • Raise DeprecationWarning on deprecated callables (Martin Pitt) (#665084)
  • pygtkcompat: Add Widget.window, scroll_to_mark, and window methods (Simon Feltman) (#694067)
  • pygtkcompat: Add Gtk.Window.set_geometry_hints which accepts keyword arguments (Simon Feltman) (#694067)
  • Ship pygobject.doap for autogen.sh (Martin Pitt) (#694591)
  • Fix crashes in various GObject signal handler functions (Simon Feltman) (#633927)
  • pygi-closure: Protect the GSList prepend with the GIL (Olivier Crête) (#684060)
  • generictreemodel: Fix bad default return type for get_column_type (Simon Feltman)

Read more
beuno

There seems to be quite a bit of buzz around Yahoo! effectively laying off remote workers (making them choose between starting to go to an office or resigning), and I've read different perspectives on the subject, for and against remote working.
Having worked at Canonical for over 4 years, and in open source projects for quite a bit longer than that, my knee-jerk reaction is that the folks crying out that remote working just isn't as productive as working in an office are being pretty short-sighted.
Canonical has hundreds of employees working remotely, far more than working in an office, and it seems like we're generally a very productive company. We take on huge competitors who have ten times the number of people working on any given project, and we put up a pretty good fight. So I can tell you remote working is full of awesome, both for the company (productivity, getting to choose from a huge pool of talent) and for the employee (no commute, fewer distractions).
I also think that the fact that open source projects are taking over the world at an incredible pace is a pretty huge testament to just how great remote working can be. It is an even more extreme case, where people aren't available on a regular schedule, nor do they have particularly tight and clear shared goals.


All that said, there are several ways things can go wrong with remote working.

Thoughtlessly mixing remote and co-located teams. All-remote and all-co-located setups tend to work out more easily. Mixing the two without a clear plan for how communication is going to work is most likely going to end badly. The co-located team will tend to talk to each other in the hallways and not bring the remote people into the loop, mostly because of the extra cost of communication. If making decisions in person is accepted, and there are no guidelines in place to document and open up the discussion to the full audience, then it's going to fail. Remote or not, documenting these things is good practice; it provides traceability, and there's less room for people to walk away with different interpretations.

Hiring remote workers who are not generally self-directed. I can't stress this point enough. Remote working isn't for everybody: you have to make sure the people who are working remotely are generally happy making decisions on their own on a daily basis, can push through problems without a lot of hand-holding and are good at flagging problems when they see them. These types of people are great to have on site as well, but in a remote situation this is a non-negotiable skill.

Unclear goals as a team or company. If what people are supposed to be doing isn't crystal clear to everybody involved, remote working is going to be very messy. Strongly self-directed people are going to push forward with what they think is the right thing to do (based on incomplete information), and less independent people are going to be reading a lot of RSS feeds.


I also think there are some common sense arguments against remote working that are actually an argument in favor of it.

Slackers will slack harder when at home. So, if you're at home, who's going to know whether you spent your morning watching TV or thinking about a really hard problem? When you're at the office, it's much easier to check up on what people are doing with their time. I think that if you have an employee whose use of time you need to check up on, you have a problem. The answer is not going to be to put him in an office and get him to learn how to alt-tab very quickly to an IDE when you walk by. You should be working with them to make sure their performance is adequate. If it's not, and you can't seem to find a way around it, fire them. Keeping them around and force-feeding work is a huge waste of time and money. Slackers are going to slack harder at home; use that to your advantage to get rid of people who aren't up to the task, or don't care anymore, more quickly.

Communication is more expensive. It is. It also forces people to learn how to communicate better, more concisely, and in a way that's generally documented. While you can easily have calls, in the end you need to email a list or use some other form of communication that reaches everybody. So there's a short-term cost for a long-term benefit. You may need that short-term benefit right now, in which case you bring people together for a week or two, spend some of the money you've saved on infrastructure, and push things forward.


So, in general I think having remote workers forces a company to have clearer, well-communicated goals and better documentation of decisions; hiring driven and self-directed people makes you think long and hard about your processes, and it opens up hiring from a much larger pool of people (all over the world!). I think those are great things to have pressuring you consistently, and they will make you a better company.
Like everything else, if you have remote workers and pretend they are the same as co-located ones, it's going to fail.

Read more
pitti

Hot on the heels of yesterday’s big 0.2 release, I pushed out umockdev 0.2.1 with a couple of bug fixes:

  • umockdev-wrapper: Use exec to avoid keeping the shell process around and make killing the subprogram from outside work properly.
  • Fix building with automake 1.12, thanks Peter Hutterer.
  • Support opening several netlink sockets (i. e. udev monitors) at the same time.
  • Fix building with older kernels which don’t have the EVIOCGMTSLOTS ioctl yet.

This fixes the “bind: address already in use” errors that were popping up in X.org and upower when running under umockdev, and finally gets us working packages for Ubuntu 12.04 LTS (in the daily-builds PPA).

Read more
pitti

I just released umockdev 0.2.

The big new feature of this release is support for evdev ioctls. I. e. you can now record what e. g. X.org is doing to touchpads, touch screens, etc.:

  $ umockdev-record /dev/input/event15 > /tmp/touchpad.umockdev
  # umockdev-record -i /tmp/touchpad.ioctl /dev/input/event15 -- Xorg -logfile /dev/null

and load that back into a testbed with X.org using the dummy driver:

  $ cat <<EOF > xorg-dummy.conf
  Section "Device"
        Identifier "test"
        Driver "dummy"
  EndSection
  EOF

  $ umockdev-run -l /tmp/touchpad.umockdev -i /dev/input/event15=/tmp/touchpad.ioctl -- \
       Xorg -config xorg-dummy.conf -logfile /tmp/X.log :5

Then e. g. DISPLAY=:5 xinput will recognize the simulated device. Note that Xvfb won’t work, as it does not use udev for device discovery but only adds the XTest virtual devices and nothing else, so you need to use the real X.org with the dummy driver to run this as a normal user.

This enables easier debugging of new kinds of input devices, as well as writing tests for handling multiple touchscreens/monitors, integration tests of Wacom devices, and so on.

This release now also works with older automakes and Vala 0.16, so that you can use this from Ubuntu 12.04 LTS. The daily PPA now also has packages for that.

Attention: This version does not work any more with recorded ioctl files from version 0.1.

A more detailed list of changes:

  • umockdev-run: Fix running of child program to keep stdin.
  • preload: Fix resolution of “/dev” and “/sys”
  • ioctl_tree: Fix endless loop when the first encountered ioctl was unknown
  • preload: Support opening a /dev node multiple times for ioctl emulation (issue #3)
  • Fix parallel build (issue #2)
  • Support xz compressed ioctl files in umockdev_testbed_load_ioctl().
  • Add example umockdev and ioctl files for a gphoto camera and an MTP capable mobile phone.
  • Fix building with automake 1.11.3 and Vala 0.16.
  • Generalize ioctl recording and emulation for ioctls with simple structs, i. e. no pointer fields. This makes it much easier to add more ioctls in the future.
  • Store return values of ioctls in records, as they are not always 0 (like EVIOCGBIT)
  • Add support for ioctl ranges (like EVIOCGABS) and ioctls with variable length (like EVIOCGBIT).
  • Add all reading evdev ioctls, for recording and mocking input devices like touch pads, touch screens, or keyboards. (issue #1)

Read more
pitti

I just released a new PyGObject for GNOME 3.7.90, with a nice set of bug fixes and some internal code cleanup. Thanks to all contributors!

  • overrides: Fix inconsistencies with drag and drop target list API (Simon Feltman) (#680640)
  • pygtkcompat: Add pygtk compatible GenericTreeModel implementation (Simon Feltman) (#682933)
  • overrides: Add support for iterables besides tuples for TreePath creation (Simon Feltman) (#682933)
  • Prefix __module__ attribute of function objects with gi.repository (Niklas Koep) (#693839)
  • configure.ac: only enable code coverage when available (Jonathan Ballet) (#693328)
  • Correctly set properties on object with statically defined properties (Jonathan Ballet) (#693618)
  • autogen.sh: Use gnome-autogen.sh (Martin Pitt) (#693328)
  • Fix reference leaks with transient floating objects (Simon Feltman) (#687522)

Read more
Jussi Pakkanen

A few times now, Thomas Bushnell of Google has given a presentation at UDS about Google’s private Ubuntu fork. One of the interesting tidbits he mentions is that deploying a system update that requires rebooting costs the company one million dollars in lost productivity.

This gives us a nice metric for estimating what other operations at the Googleplex might cost. If we assume that a reboot takes five minutes per person, then it follows that taking up one minute of all Google staff’s time costs two hundred thousand dollars. Let’s use this piece of information to estimate how much money configure scripts are costing in lost productivity.

The duration of one configure script varies wildly. It is rare to go under 30 seconds and several minutes is not uncommon. Let’s use a round lowball estimate of one minute on average.

That million dollars accounts for every single Google employee. What percentage of those have to build source code with Autotools is unknown, so let’s say half. It is likewise hard to estimate how often these people run configure scripts, but let’s be on the safe side and say once per day. Similarly let’s assume 200 working days in one year.

Crunching all the numbers gives us the final result. That particular organisation is paying $20 million every year just to have their engineers sit still watching text scroll by in a terminal.
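
Spelled out with the assumptions above, the arithmetic is:

  $200,000 per all-staff minute × 1/2 of staff × 1 configure minute per day × 200 working days ≈ $20,000,000 per year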

Read more
pitti

What is this?

umockdev is a set of tools and a library to mock hardware devices for programs that handle Linux hardware devices. It also provides tools to record the properties and behaviour of particular devices, and to run a program or test suite under a test bed with the previously recorded devices loaded.

This allows developers of software like gphoto or libmtp to receive these records in bug reports and recreate the problem on their system without having access to the affected hardware, as well as to write regression tests which do not need any particular privileges and are thus capable of running in a standard make check.

After working on it for several weeks and lots of rumbling on G+, it’s now useful and documented enough for the first release 0.1!

Component overview

umockdev consists of the following parts:

  • The umockdev-record program generates text dumps (conventionally called *.umockdev) of specified devices, or of all of the system’s devices, with their sysfs attributes and udev properties. It can also record the ioctls that a particular program sends to and receives from a device, and store them in a text file (conventionally called *.ioctl).
  • The libumockdev library provides the UMockdevTestbed GObject class which builds sysfs and /dev testbeds, provides API to generate devices, attributes, properties, and uevents on the fly, and can load *.umockdev and *.ioctl records into them. It provides VAPI and GI bindings, so you can use it from C, Vala, and any programming language that supports introspection. This is the API that you should use for writing regression tests. You can find the API documentation in docs/reference in the source directory.
  • The libumockdev-preload library intercepts access to /sys, /dev/, the kernel’s netlink socket (for uevents) and ioctl() and re-routes them into the sandbox built by libumockdev. You don’t interface with this library directly, instead you need to run your test suite or other program that uses libumockdev through the umockdev-wrapper program.
  • The umockdev-run program builds a sandbox using libumockdev, can load *.umockdev and *.ioctl files into it, and run a program in that sandbox. I. e. it is a CLI interface to libumockdev, which is useful in the “debug a failure with a particular device” use case if you get the text dumps from a bug report. This automatically takes care of using the preload library, i. e. you don’t need umockdev-wrapper with this. You cannot use this program if you need to simulate uevents or change attributes/properties on the fly; for those you need to use libumockdev directly.

Example: Record and replay PtP/MTP USB devices

So how do you use umockdev? For the “debug a problem” use case you usually don’t want to write a program that uses libumockdev, but just use the command line tools. Let’s capture some runs from libmtp tools, and replay them in a mock environment:

  • Connect your digital camera, mobile phone, or other device which supports PtP or MTP, and locate it in lsusb. For example
      Bus 001 Device 012: ID 0fce:0166 Sony Ericsson Xperia Mini Pro
  • Dump the sysfs device and udev properties:
      $ umockdev-record /dev/bus/usb/001/012 > mobile.umockdev
  • Now record the dynamic behaviour (i. e. usbfs ioctls) of various operations. You can store multiple different operations in the same file, which will share the common communication between them. For example:
      $ umockdev-record --ioctl mobile.ioctl /dev/bus/usb/001/012 mtp-detect
      $ umockdev-record --ioctl mobile.ioctl /dev/bus/usb/001/012 mtp-emptyfolders
  • Now you can disconnect your device and run the same operations in a mocked testbed. Please note that /dev/bus/usb/001/012 merely echoes what is in mobile.umockdev and is independent of what is actually in the real /dev directory. You can rename that device in the generated *.umockdev files and on the command line.
      $ umockdev-run --load mobile.umockdev --ioctl /dev/bus/usb/001/012=mobile.ioctl mtp-detect
      $ umockdev-run --load mobile.umockdev --ioctl /dev/bus/usb/001/012=mobile.ioctl mtp-emptyfolders

Example: using the library to fake a battery

If you want to write regression tests, it’s usually more flexible to use the library instead of calling everything through umockdev-run. As a simple example, let’s pretend we want to write tests for upower.

Batteries, and power supplies in general, are simple devices in the sense that userspace programs such as upower only communicate with them through sysfs and uevents; no /dev nor ioctls are necessary. docs/examples/ has two example programs showing how to use libumockdev to create a fake battery device, change it to low charge, send an uevent, and run upower on a local test system D-BUS in the testbed, while watching what happens with upower --monitor-detail. battery.c shows how to do that with plain GObject in C; battery.py is the equivalent program in Python, using the GI binding. You can just run the latter like this:

  umockdev-wrapper python3 docs/examples/battery.py

and you will see that upowerd (which runs on a temporary local system D-BUS in the test bed) will report a single battery with 75% charge, which gets down to 2.5% a second later.

The gist of it is that you create a test bed with

  UMockdevTestbed *testbed = umockdev_testbed_new ();

and add a device with certain sysfs attributes and udev properties with

    gchar *sys_bat = umockdev_testbed_add_device (
            testbed, "power_supply", "fakeBAT0", NULL,
            /* attributes */
            "type", "Battery",
            "present", "1",
            "status", "Discharging",
            "energy_full", "60000000",
            "energy_full_design", "80000000",
            "energy_now", "48000000",
            "voltage_now", "12000000",
            NULL,
            /* properties */
            "POWER_SUPPLY_ONLINE", "1",
            NULL);

You can then e. g. change an attribute and synthesize a “change” uevent with

  umockdev_testbed_set_attribute (testbed, sys_bat, "energy_now", "1500000");
  umockdev_testbed_uevent (testbed, sys_bat, "change");

With Python or other introspected languages, or in Vala, it works the same way, except that it looks a bit leaner due to “proper” object semantics.
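
For instance, the Python equivalent of the snippets above looks roughly like this (a sketch following the GI binding; see docs/examples/battery.py for the complete program):

  from gi.repository import UMockdev

  testbed = UMockdev.Testbed.new()
  sys_bat = testbed.add_device('power_supply', 'fakeBAT0', None,
      ['type', 'Battery', 'present', '1', 'status', 'Discharging',
       'energy_full', '60000000', 'energy_full_design', '80000000',
       'energy_now', '48000000', 'voltage_now', '12000000'],
      ['POWER_SUPPLY_ONLINE', '1'])

  # change an attribute and synthesize a "change" uevent
  testbed.set_attribute(sys_bat, 'energy_now', '1500000')
  testbed.uevent(sys_bat, 'change')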

Packages

I have a packaging branch for Ubuntu and a recipe to do daily builds with the latest upstream code into my daily builds PPA (for 12.10 and raring). I will soon upload it to Raring proper, too.

What’s next?

The current set of features should already get you quite far for a range of devices. I’d love to get feedback from you if you use this for anything useful, in particular how to improve the API, the command line tools, or the text dump format. I’m not really happy with the split between umockdev (sys/dev) and ioctl files and the relatively complicated CLI syntax of umockdev-record, so any suggestion is welcome.

One use case that I have for myself is to extend the coverage of ioctls for input devices such as touch screens and wacom tablets, so that we can write some tests for gnome-settings-daemon plugins.

I also want to find a way to pass ioctls back to the test suite/calling program instead of having to handle them all in the preload library, which would make it a lot more flexible. However, due to the nature of the ioctl ABI this is not easy.

Where to go to

The code is hosted in the umockdev project on GitHub; this started out as a systemd branch to add this functionality to libudev, but after a discussion with Kay we decided to keep it separate. I kept it in git anyway, given how popular it is today. For the bzr lovers, Launchpad has an import at lp:umockdev.

Release tarballs will be on Launchpad as well. Please file bugs and enhancement requests in the GitHub tracker.

Finally, if you have questions or want to discuss something, you can always find me on IRC (pitti on Freenode or GNOME).

Thanks for your attention and happy testing!

Read more