One issue that I hear surprisingly often is “there is zero documentation for those bindings”. Tools for building documentation out of a .gir have existed for a long time already, but far too many people seem not to know about them.
For example, to build Yelp XML documentation out of the libnotify bindings for Python:
$ g-ir-doc-tool --language=Python -o /tmp/notify-doc /usr/share/gir-1.0/Notify-0.7.gir
Then you can call yelp /tmp/notify-doc to browse the documentation. You can of course also use the standard Mallard tools to convert them to HTML for sticking them on a website:
$ cd /tmp/notify-doc
$ yelp-build html .
Admittedly they are far from pretty, and there are still lots of refinements that should be done for the documentation itself (like adding language-specific examples) and also for the generated result (prettification, dynamic search, and what not), but it’s certainly far from “nothing”, and a good start.
If you are interested in working on this, please show up in #introspection or discuss it on bugzilla, desktop-devel-list@, or the library-specific lists/bug trackers.
I just released a new PyGObject for GNOME 3.7.91. This brings some marshalling fixes, plugs tons of memory leaks, and now raises a Python DeprecationWarning when your code calls a method which is marked as deprecated in the typelib. Please note that Python hides these warnings by default, so if you are interested in them you need to run python with the -Wd (or -W default) option.
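As a minimal sketch (the imported library is only illustrative), you can surface those warnings either from the command line or programmatically:

import warnings
# Show DeprecationWarnings that Python would otherwise suppress;
# this is equivalent to starting the interpreter with "python -Wd".
warnings.simplefilter("default", DeprecationWarning)

from gi.repository import Gtk   # any introspected library
# Calling a method that the typelib marks as deprecated will now print a
# DeprecationWarning instead of staying silent.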
Thanks to all contributors!
There seems to be quite a bit of buzz around Yahoo! effectively laying off remote workers (making them choose to start going to an office or resign), and I've read different perspectives on the subject, for and against remote working.
Having worked at Canonical for over 4 years, and in open source projects for quite a bit longer than that, my knee-jerk reaction is that the folks crying out that remote working just isn't as productive as working in an office are being pretty short-sighted.
Canonical has hundreds of employees working remotely, far more than working in an office, and it seems like we're generally a very productive company. We take on huge competitors who have ten times the number of people working on any given project, and we put up a pretty good fight. So I can tell you remote working is full of awesome for both the company (productivity, getting to choose from a huge pool of talent) and the employee (no commute, fewer distractions).
I also think that the fact that open source projects are taking over the world at an incredible pace is a pretty huge testament to just how great remote working can be. That's even an extreme case, where people aren't available on a regular schedule or working towards much tighter and clearer shared goals.
All that said, there are several ways things can go wrong with remote working.
Thoughtlessly mixing remote and co-located teams. All-remote and all co-located tends to work out easier. Mixing these things without having a clear plan on how communication is going to work is most likely going to end up badly. The co-located team will tend to talk to each other in the hallways and not bring the people who are remote into the loop, mostly because of the extra cost of communication there. If making decisions in person is accepted, and there are no guidelines in place to document and open up the discussion to the full audience, then it's going to fail. Regardless of remote-or-not, documenting these things is good practice, it provides traceability and there's less room for people to go away with different interpretations.
Hiring remote workers that are not generally self-directed. I can't stress this point enough. Remote working isn't for everybody, you have to make sure the people who are working remotely are generally happy making decisions on their own on a daily basis, can push through problems without a lot of hand-holding and are good at flagging problems when they see one. These types of people are great to have on site as well, but in a remote situation this is a non-negotiable skill.
Unclear goals as a team or company. If what people are supposed to be doing isn't crystal clear to everybody involved, remote working is going to be very messy. Strongly self-directed people are going to push forward with what they think is the right thing to do (based on incomplete information), and less independent people are going to be reading a lot of RSS feeds.
I also think there are some common sense arguments against remote working that are actually an argument in favor of it.
Slackers will slack harder when at home. So, if you're at home, who's going to know if you spent your morning watching TV or thinking about a really hard problem? When you're at the office, it's much easier to check up on what you're doing with your time. I think that if you have an employee whose use of time you need to check up on, you have a problem. The answer is not going to be to put them in an office and have them learn how to alt-tab very quickly to an IDE when you walk by. You should be working with them to make sure their performance is adequate. If it's not, and you can't seem to find a way around it, fire them. Keeping them around and force-feeding them work is a huge waste of time and money. Slackers are going to slack harder at home; use that to your advantage to get rid of people who aren't up to the task or don't care anymore that much quicker.
Communication is more expensive. It is. It also forces people to learn how to communicate better, more concisely, and in a way that's generally documented. While you can easily have calls, in the end you need to email a list or use some form of communication that reaches everybody. So there's a short-term cost for a long-term benefit. Sometimes you need that benefit in the short term, in which case you bring people together for a week or two, spend some of the money you've saved on infrastructure, and push things forward.
So, in general, I think having remote workers forces a company to have clearer, well-communicated goals, better documentation of decisions, and driven, self-directed hires; it makes you think long and hard about your processes and opens you up to hiring from a much larger pool of people (all over the world!). I think those are great things to have pressuring you consistently, and they will make you a better company.
Like everything else, if you have remote workers and pretend they are the same as co-located ones, it's going to fail.
This fixes the “bind: address already in use” errors that were popping up in X.org and upower when running under umockdev, and finally gets us working packages for Ubuntu 12.04 LTS (in the daily-builds PPA).
I just released umockdev 0.2.
The big new feature of this release is support for evdev ioctls. I. e. you can now record what e. g. X.org is doing to touchpads, touch screens, etc.:
$ umockdev-record /dev/input/event15 > /tmp/touchpad.umockdev
# umockdev-record -i /tmp/touchpad.ioctl /dev/input/event15 -- Xorg -logfile /dev/null
and load that back into a testbed with X.org using the dummy driver:
cat <<EOF > xorg-dummy.conf
Section "Device"
    Identifier "test"
    Driver "dummy"
EndSection
EOF
$ umockdev-run -l /tmp/touchpad.umockdev -i /dev/input/event15=/tmp/touchpad.ioctl -- \
    Xorg -config xorg-dummy.conf -logfile /tmp/X.log :5
Then e. g. DISPLAY=:5 xinput will recognize the simulated device. Note that Xvfb won’t work as that does not use udev for device discovery, but only adds the XTest virtual devices and nothing else, so you need to use the real X.org with the dummy driver to run this as a normal user.
This enables easier debugging of new kinds of input devices, as well as writing tests for handling multiple touchscreens/monitors, integration tests of Wacom devices, and so on.
This release now also works with older automakes and Vala 0.16, so that you can use this from Ubuntu 12.04 LTS. The daily PPA now also has packages for that.
Attention: This version does not work any more with recorded ioctl files from version 0.1.
More detailed list of changes:
I just released a new PyGObject for GNOME 3.7.90, with a nice set of bug fixes and some internal code cleanup. Thanks to all contributors!
A few times now, Thomas Bushnell of Google has given a presentation at UDS about Google’s private Ubuntu fork. One of the interesting tidbits he mentions is that deploying a system update that requires rebooting costs the company one million dollars in lost productivity.
This gives us a nice metric to evaluate how much other operations at the Googleplex might cost. If we assume that a reboot takes five minutes per person, then it follows that taking one minute of all Google staff costs two hundred thousand dollars. Let’s use this piece of information to estimate how much money configure scripts are costing in lost productivity.
The duration of one configure script varies wildly. It is rare to go under 30 seconds and several minutes is not uncommon. Let’s use a round lowball estimate of one minute on average.
That million dollars accounts for every single Google employee. What percentage of those have to build source code with Autotools is unknown, so let’s say half. It is likewise hard to estimate how often these people run configure scripts, but let’s be on the safe side and say once per day. Similarly let’s assume 200 working days in one year.
Crunching all the numbers gives us the final result. That particular organisation is paying $20 million every year just to have their engineers sit still watching text scroll by in a terminal.
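Spelling out the same back-of-the-envelope arithmetic (a sketch; all the numbers are the assumptions stated above, not measurements):

# Rough estimate of the yearly cost of configure runs at such a company.
cost_per_staff_minute = 1000000.0 / 5   # $1M reboot / 5 minutes = $200,000 per staff-minute
configure_minutes = 1                   # average duration of one configure run
fraction_building = 0.5                 # half of the staff build with Autotools
runs_per_day = 1
work_days = 200

yearly_cost = (cost_per_staff_minute * configure_minutes *
               fraction_building * runs_per_day * work_days)
print("$%s per year" % format(int(yearly_cost), ","))   # -> $20,000,000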
umockdev is a set of tools and a library to mock hardware devices for programs that handle Linux hardware devices. It also provides tools to record the properties and behaviour of particular devices, and to run a program or test suite under a test bed with the previously recorded devices loaded.
This allows developers of software like gphoto or libmtp to receive these records in bug reports and recreate the problem on their system without having access to the affected hardware, as well as to write regression tests which do not need any particular privileges and thus are capable of running in standard test environments.
After working on it for several weeks and lots of rumbling on G+, it’s now useful and documented enough for the first release 0.1!
umockdev consists of the following parts:
The umockdev-record program generates text dumps (conventionally called *.umockdev) of some specified, or all of the system’s devices and their sysfs attributes and udev properties. It can also record ioctls that a particular program sends and receives to/from a device, and store them into a text file (conventionally called *.ioctl).
The libumockdev library provides the UMockdevTestbed GObject class which builds sysfs and /dev testbeds, provides API to generate devices, attributes, properties, and uevents on the fly, and can load *.umockdev and *.ioctl records into them. It provides VAPI and GI bindings, so you can use it from C, Vala, and any programming language that supports introspection. This is the API that you should use for writing regression tests. You can find the API documentation in docs/reference in the source directory.
The umockdev-run program builds a sandbox using libumockdev, can load *.umockdev and *.ioctl files into it, and run a program in that sandbox. I. e. it is a CLI interface to libumockdev, which is useful in the “debug a failure with a particular device” use case if you get the text dumps from a bug report. This automatically takes care of using the preload library, i. e. you don’t need umockdev-wrapper with this. You cannot use this program if you need to simulate uevents or change attributes/properties on the fly; for those you need to use libumockdev directly.
So how do you use umockdev? For the “debug a problem” use case you usually don’t want to write a program that uses libumockdev, but just use the command line tools. Let’s capture some runs from libmtp tools, and replay them in a mock environment:
Bus 001 Device 012: ID 0fce:0166 Sony Ericsson Xperia Mini Pro
$ umockdev-record /dev/bus/usb/001/012 > mobile.umockdev
$ umockdev-record --ioctl mobile.ioctl /dev/bus/usb/001/012 mtp-detect
$ umockdev-record --ioctl mobile.ioctl /dev/bus/usb/001/012 mtp-emptyfolders
Note that the device path /dev/bus/usb/001/012 merely echoes what is in mobile.umockdev and is independent of what is actually in the real /dev directory. You can rename that device in the generated *.umockdev files and on the command line.
$ umockdev-run --load mobile.umockdev --ioctl /dev/bus/usb/001/012=mobile.ioctl mtp-detect
$ umockdev-run --load mobile.umockdev --ioctl /dev/bus/usb/001/012=mobile.ioctl mtp-emptyfolders
If you want to write regression tests, it’s usually more flexible to use the library instead of calling everything through umockdev-run. As a simple example, let’s pretend we want to write tests for upower.
Batteries, and power supplies in general, are simple devices in the sense that userspace programs such as upower only communicate with them through sysfs and uevents. No /dev nor ioctls are necessary. docs/examples/ has two example programs that show how to use libumockdev to create a fake battery device, change it to low charge, send an uevent, and run upower on a local test system D-BUS in the testbed, while watching what happens with upower --monitor-detail. battery.c shows how to do that with plain GObject in C; battery.py is the equivalent program in Python, using the GI binding. You can just run the latter like this:
umockdev-wrapper python3 docs/examples/battery.py
and you will see that upowerd (which runs on a temporary local system D-BUS in the test bed) will report a single battery with 75% charge, which gets down to 2.5% a second later.
The gist of it is that you create a test bed with
UMockdevTestbed *testbed = umockdev_testbed_new ();
and add a device with certain sysfs attributes and udev properties with
gchar *sys_bat = umockdev_testbed_add_device (
        testbed, "power_supply", "fakeBAT0", NULL,
        /* attributes */
        "type", "Battery",
        "present", "1",
        "status", "Discharging",
        "energy_full", "60000000",
        "energy_full_design", "80000000",
        "energy_now", "48000000",
        "voltage_now", "12000000",
        NULL,
        /* properties */
        "POWER_SUPPLY_ONLINE", "1",
        NULL);
You can then e. g. change an attribute and synthesize a “change” uevent with
umockdev_testbed_set_attribute (testbed, sys_bat, "energy_now", "1500000");
umockdev_testbed_uevent (testbed, sys_bat, "change");
With Python or other introspected languages, or in Vala it works the same way, except that it looks a bit leaner due to “proper” object semantics.
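For example, a rough Python equivalent of the C snippets above could look like this (a sketch, assuming the introspected binding exposes the varargs-free add_devicev() variant; docs/examples/battery.py in the source tree is the authoritative version):

from gi.repository import UMockdev

testbed = UMockdev.Testbed.new()

# create the fake battery with sysfs attributes and udev properties
sys_bat = testbed.add_devicev(
    "power_supply", "fakeBAT0", None,
    ["type", "Battery", "present", "1", "status", "Discharging",
     "energy_full", "60000000", "energy_full_design", "80000000",
     "energy_now", "48000000", "voltage_now", "12000000"],
    ["POWER_SUPPLY_ONLINE", "1"])

# later on, simulate the battery draining and send a "change" uevent
testbed.set_attribute(sys_bat, "energy_now", "1500000")
testbed.uevent(sys_bat, "change")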
The current set of features should already get you quite far for a range of devices. I’d love to get feedback from you if you use this for anything useful, in particular how to improve the API, the command line tools, or the text dump format. I’m not really happy with the split between umockdev (sys/dev) and ioctl files and the relatively complicated CLI syntax of umockdev-record, so any suggestion is welcome.
One use case that I have for myself is to extend the coverage of ioctls for input devices such as touch screens and wacom tablets, so that we can write some tests for gnome-settings-daemon plugins.
I also want to find a way to pass ioctls back to the test suite/calling program instead of having to handle them all in the preload library, which would make it a lot more flexible. However, due to the nature of the ioctl ABI this is not easy.
The code is hosted on github in the umockdev project; this started out as a systemd branch to add this functionality to libudev, but after a discussion with Kay we decided to keep it separate. But I kept it in git anyway, given how popular it is today. For the bzr lovers, Launchpad has an import at lp:umockdev.
Finally, if you have questions or want to discuss something, you can always find me on IRC (pitti on Freenode or GNOME).
Thanks for your attention and happy testing!
I just released a new PyGObject for GNOME 3.7.5. Unfortunately master.gnome.org is out of space right now, so I put the new tarball on my Ubuntu people account for the time being.
This again brings a nice set of memory leak and bug fixes, some more reduction of static bindings, and better support for building under Windows.
Thanks to all contributors!
If you haven’t read the original post yet, here are the quick details: running from 29th to 31st January 2013 we are going to have sessions, mostly on IRC, some on Hangouts-on-Air, where you get an introduction to all kinds of topics surrounding Ubuntu Development. After attending the sessions you will have a good idea how things roughly fit together, how to get started, who to talk to and what’s going on. It’s the perfect opportunity.
Here’s a few quotes from session leaders:
Benjamin Drung and Michael Bienia (of whom the internet does not seem to have any pictures whatsoever) are going to lead the Developers Roundtable and have this to say:
“Do you have questions about Ubuntu development? Here you have the best opportunity to ask everything you want to know, because we will have a number of developers there who can answer your questions for you.”
David Planella, who will talk about “Writing apps for Ubuntu”, says:
“Learn how to use the best open source tools and technologies to write your apps on Ubuntu, both on the desktop and on the phone. You’ll be able to get your first app running in a matter of minutes!”
Michael Hall never gets enough, so he’s giving two sessions at UDW this time around. Here’s what he has to say about Ubuntu App Developer tools: “Ubuntu provides a variety of tools to help you write and manage your applications. This session will cover everything from bootstrapping a new project, to making the final packages installable through the Software Center and everything in between.”
He will also talk about Unity integration: “The Unity desktop provides many opportunities for your application to integrate with the full user experience. Learn how to add your Application to the Unity messaging or sound indicators, add your own indicator, extend the Unity Launcher and much more.”
We’re excited to have Oliver Grawert here, who will talk about Creating Ubuntu images and the Nexus7 images in particular. He will talk about “[t]he Ubuntu image build infrastructure at a glance, what tools do we use, how do they interact and how is the hardware set up for building the official Ubuntu images” and “[h]ow are the nexus7 images different from “normal” Ubuntu images, what can be hacked to make small modifications, how can they be re-packed or supplied with a different root file system“.
Alex Chiang will introduce us to the world of memory leaks and says:
“As we polish and prep Ubuntu for mobile devices, a key activity will be hunting down and squashing memory leaks. This session will discuss the basic theory of leaks, introduce valgrind and our brand new apport-valgrind wrapper, and how to analyze a valgrind log file. A C/C++ background will be helpful to get the most out of this session, but is not strictly required.”
QA mastermind Nicholas “balloons” Skaggs will talk us through “Automated Testing with autopilot” and says:
“Learn about how autopilot is utilized by the unity team and quality team to test the ubuntu desktop. We’ll also provide an overview of what autopilot can do, show and run some example testcases, and give you the knowledge needed to get started writing your own autopilot testcases.”
We are super happy to have brought this line-up of speakers to Ubuntu Developer Week this time around. Head to https://wiki.ubuntu.com/UbuntuDeveloperWeek to review the full schedule, how to join in and find out more.
Share the news with your friends and bring your questions!
The times for Ubuntu have never been more exciting. Cloud, server, desktop, laptop, TV, tablet, phone – everything runs Ubuntu or is soon going to. This makes developing Ubuntu very special, because fixes which go into Ubuntu in one place will benefit all form factors and all circumstances where it’s used. By improving Ubuntu you make millions of people around the globe happy.
During every 6 month release cycle we run Ubuntu Developer Week. It’s back and we’re going to have it from 29th January to 31st January. During the event we will have online sessions where seasoned Ubuntu developers introduce you to their respective area of expertise or to Ubuntu Development in general.
We will have many great sessions, from hands-on introductions to packaging and Ubuntu development to talks about how to quickly get involved in certain teams and interact with other projects. We will talk about tools and infrastructure, fixing bugs, finding memleaks, working with apps, creating Ubuntu images, and much, much more. This is the best opportunity to get a feel for how Ubuntu development works, get to know people and ask all the questions you might have.
I talked to a few session hosts; read below what they had to say.
Martin Pitt, who will talk about Automated Testing, says: “We have been, and are changing the Ubuntu development process to employ automated testing and avoid introducing regressions, and to improve confidence, focus, and development speed. In the first talk I will give an overview about the various kinds of tests that we do, so that you know where to watch out for failures and get debugging information. The second talk focuses on how to write tests, i.e. which technologies are available for e.g. hardware and GUI related behaviour or system-wide integration checks.”
Stefano Rivera, who will talk about Upstreams and Debian in particular, said: “So, working effectively in Ubuntu means also working with the teams and people upstream who wrote the software we distribute. I’ll talk about why this is important, when it’s necessary, and how to go about it. In particular, our most important upstream is Debian. Debian has a rather unusual (though powerful) bug-tracker. We’ll cover finding, submitting, and modifying bugs on it.”
Chris Wilson, project leader of the Hundred Papercuts Team, says: “Unity may be the shiny new thing that everyone loves, but style without substance is only so much fluff, and the substance of Ubuntu is still its GTK-based apps. One Hundred Paper Cuts focuses its attention on that substance, rubbing out the little annoyances that get under our skin every day we’re using Ubuntu. This session will introduce you to the project, how it works, and how to get involved. If you want to contribute to Ubuntu in a way that has the biggest impact on the quality of experience for the end user, then don’t miss this.”
Bhavani Shankar said about his talk on patch systems: “Many a time we wonder how to integrate a particular fix into a particular part of the code in a program and upload it into the repositories without having to change the code each time by hand and making it clumsy. In this session I’m going to show how to use the different patch management systems that are in practice now.”
About his talk about the app review process in Ubuntu he says: “In this session I’m going to explain the present workflow of reviewing apps and give an introduction into the new app dev upload process to automate reviews.”
The forum we use for this is IRC, as it makes it easy for many people to interact without losing track; you can easily copy/paste, and we can save the logs as searchable docs afterwards. You join in by simply connecting to #ubuntu-classroom on irc.freenode.net.
Check out the schedule and find more info on the Ubuntu wiki. We hope to see you all there; please let your friends know too.
We have all run into software inconveniences. These are things that you can basically do, but that for some reason or another are unintuitive, hard, or needlessly complex. When you air your concerns on these issues, sometimes they get fixed. At other times you get back a reply starting with “Well in general you may have a point, but …”.
The rest of the sentence is something along the lines of these examples:
“… you only have to do it once so it’s no big deal.”
“… there are cases where [e.g. autodetection] would not work so having the user [manually do task X] is the only way to be reliable.”
“… I don’t see any problem, in fact I like it the way it is.”
“… replacing [the horrible thing in question] with something better is too much work.”
“… fixing that would change things and change is bad.”
“… that is the established standard way of doing things.”
“… having a human being write that in is good, it means that the input is inspected.”
These are all fine and acceptable reasonings under certain circumstances. In fact, they are great! Let’s see what life would be like in a parallel universe where people had followed them slavishly.
You arrive at your work computer and turn it on. The LILO boot prompt comes up as usual. You type in the partition you want to boot from. This you must do every time, because you might have changed partition settings and thus made LILO go out of sync. You type in your boot stanza, sure in the knowledge that you get a 100% rock solid boot every time.
Except when you have a typo in your boot command but a computer can’t work around that. And that happens only rarely anyways and why would you boot your computer more than once per month?
Once the kernel has loaded, you type in the kernel modules you need to use the machine. You also type in all extra parameters those modules require because some chipsets may work incorrectly sometimes (or so you have been told). So you type in some dozen strings of hexadecimal numbers and really enjoy it in a stockholmesque way.
Finally all the data is put in and the system will boot itself. Then it is time to type in your network settings. In this universe there is no Protocol to Configure network Host settings Dynamically. And why would there be? Any bug in such a system would render the entire network unusable. No, the only way to ensure that things work is to configure network settings by hand every time. Errors in settings cause only one machine to break, not the entire network. Unless you mix gateway/netmask/IP addresses but surely no-one is that stupid? And if they are, it’s their own damn fault! Having things fail spectacularly is GOOD because it shames people into doing the right thing.
After this and a couple of other simple things (each of which you only need to do once, remember) you finally have a working machine. You log on.
Into a text console, naturally. Not all people need X so it should not be started by default. Resources must be used judiciously after all.
But you only need to start X once per session so no biggie. Just like you only need to write in your monitor modeline once per X startup because autodetection might fail and cause HW failure. The modeline can not be stored in a file and used automatically because you might have plugged in a different monitor. Typing it in every time is the only way to be sure. Or would you rather die horribly in a fire caused by incorrect monitor parameters?
After all that is done you can finally fire up an XTerm to start working. But today you feel like increasing the font size a bit. This is about as simple as can get. XTerm stores a list of font sizes it will display in XResources. All you have to do is to edit them, shut down X and start it up again.
Easy as pie. And the best part: you only have to do this once.
Well, once every time you want to add new font sizes. But how often is that, really?
The examples listed above are but a small fraction of reality. If computer users had to do all “one time only” things, they would easily take the entire eight hour work day. The reason they don’t is that some developer Out There has followed this simple rule:
Almost every task that needs to be done “one time only” should, in fact, be done exactly zero times.
I just released a new PyGObject for GNOME 3.7.4, which is due on Wednesday.
This release saw a lot of bug and memory leak fixes again, as well as enabling some more data types such as GParamSpec, boxed list properties, or directly setting string members in structs.
Thanks to all contributors!
Summary of changes (see change log for complete details):
Using libraries in C++ is simple. You just do #include<libname> and all necessary definitions appear in your source file.
But what is the cost of this single line of code?
Quite a lot, it turns out. Measuring the effect is straightforward. GCC has a compiler flag, -E, which only runs the preprocessor on the given source file. The cost of an include can be measured by writing a source file that has only one command: #include<filename>. The number of lines in the resulting file tells how much code the compiler needs to parse in order to use the library.
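As a rough sketch of that measurement (the script name is made up, and system headers like gtk/gtk.h additionally need the include paths reported by pkg-config):

# include_cost.py -- count preprocessed lines and time for a single #include
import subprocess
import sys
import time

header = sys.argv[1]                       # e.g. "vector" or "map"
source = "#include<%s>\n" % header

start = time.time()
result = subprocess.run(["gcc", "-E", "-x", "c++", "-"],
                        input=source, capture_output=True, text=True, check=True)
elapsed = time.time() - start

print("%s: %d lines, %.2f s" % (header, len(result.stdout.splitlines()), elapsed))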
Here is a table with measurements. They were run on a regular desktop PC with 4 GB of RAM and an SSD disk. The tests were run several times to ensure that everything was in cache. The machine was running Ubuntu 12.10 64 bit and the compiler was gcc.
Header                  LOC      Time (s)
map                     8751     0.02
unordered_map           9728     0.03
vector                  9964     0.02
Python.h                11577    0.05
string                  15791    0.07
memory                  17339    0.04
sigc++/sigc++.h         21900    0.05
boost/regex.h           22285    0.06
iostream                23496    0.06
unity/unity.h           28254    0.14
xapian.h                36023    0.08
algorithm               40628    0.12
gtk/gtk.h               52379    0.26
gtest/gtest.h           53588    0.12
boost/proto/proto.hpp   78000    0.63
gmock/gmock.h           82021    0.18
QtCore/QtCore           82090    0.22
QtWebKit/QtWebKit       95498    0.23
QtGui/QtGui             116006   0.29
boost/python.hpp        132158   3.41
Nux/Nux.h               158429   0.71
It should be noted that the elapsed time is only the amount it takes to run the code through the preprocessor. This is relatively simple compared to parsing the code and generating the corresponding machine instructions. I ran the test with Clang as well and the times were roughly similar.
Even the most common headers such as vector add almost 10k lines of code whenever they are included. This is quite a lot more than most source files that use them. On the other end of the spectrum is stuff like Boost.Python, which takes over three seconds to include. An interesting question is why it is so much slower than Nux, even though it has less code.
This is the main reason why spurious includes need to be eliminated. Simply having include directives causes massive loss of time, even if the features in question are never used. Placing a slow include in a much used header file can cause massive slowdowns. So if you could go ahead and not do that, that would be great.
A long time ago in the dawn of the new millennium lived a man. He was working as a software developer in a medium sized company. His life was pretty good all things considered. For the purposes of this narrative, let us call him Junior Developer.
At his workplace CVS was used for revision control. This was okay, but every now and then problems arose. Because it could not do atomic commits, sometimes two people would check in things at the same time, which broke everything. Sometimes this was immediately apparent and caused people to scramble around to quickly fix the code. At other times the bugs would lie dormant and break things at the worst possible times.
This irritated everyone, but since the system mostly worked, this was seen as an unfortunate fact of life. Then Junior Developer found out about a brand new thing called Subversion. It seemed to be just the thing they needed. It was used in almost exactly the same way, but it was so much better. All commits were atomic, which made mixing commits impossible. One could even rename files, a feature thus far unheard of.
This filled the heart of Junior Developer with joy. With one stroke they could eliminate most of their tooling problems and therefore improve their product’s quality. Overjoyed he waited for the next team meeting where they discussed internal processes.
Once the meeting started, the Junior Developer briefly reminded people of the problems and then explained how Subversion would fix most of the problems that CVS was causing.
At the other end of the table sat a different kind of man, who we shall call the Senior Developer. He was a man of extraordinary skill. He had personally designed and coded many of the systems that the company’s products relied on. Whenever anyone had a difficult technical issue, he would go ask the Senior Developer. His knowledge on his trade had extensive depth and breadth.
Before anyone else had time to comment on the Junior Developer’s suggestion, he grabbed the stage and let out a reply in a stern voice.
- This is not the sort of discussion we should be getting into. CVS is good and we’ll keep using it. Next issue.
The Junior Developer was shocked. He tried to form some sort of a reply but words just refused to come out of his mouth. Why was his suggestion for improvement shot down so fast? Had someone already done a Subversion test he had not heard about? Had he presented his suggestion too brazenly and offended the Senior Developer? What was going on?
These and other questions raced around in his head for the next few days. Eventually he gathered enough willpower and approached the Senior Developer during a coffee break.
- Hey, remember that discussion on Subversion and CVS we had a few days ago? Why did you declare it useless so quickly? Have you maybe tried it and found it lacking?
- I figured you might mention this. Look, I’m sure you are doing your best but the fact of the matter is that CVS is a good tool and it does everything we want it to do.
- But that’s just the thing. It does not do what we want. For example it does not have atomic commits. It would prevent check-in conflicts.
The Senior Developer lifted his coffee cup to his lips and took a swig. Not to play for time, but simply to give emphasis to his message.
- CVS is a great tool. You just have to be careful when using it and problems like this don’t happen.
The Junior Developer then tried to explain that even though they were being careful, breakages still happened and they had been for as long as anyone could remember. He tried explaining that in Subversion one could rename files and retain their version control history. He tried to explain how these and other features would be good, how they would allow for developers to spend more of their time on actual code and less on working around features of their tools.
The Senior Developer countered each of these issues with one of two points. Number one was the fact that you could achieve roughly the same with CVS if that is really what was wanted (with an unspoken but very clear implication that it was not). Number two was that functionality such as renames could be achieved by manually editing the repository files. This was considered a major plus, since one would not be able to do this kind of maintenance work on Subversion repositories because they were not plain text files. Any database runs the risk of corruption, which was unacceptable for something as important as source code.
The discussion went on but eventually the Senior Developer had finished his coffee and walked over to the dishwasher to put his cup away.
- Look, I appreciate the thinking you have obviously put into this but let me tell you a little something.
He put his cup away and turned to face the Junior Developer who at this point was majorly frustrated.
- I have used CVS for over 10 years. It is the best system there is. In this time there have been dozens of revision control systems that claim to have provided the same benefits that this Subversion of yours does. I have tried many of them and none have delivered. Some have been worse than the systems CVS replaced if you can believe that. The same will happen with Subversion. The advantages it claims to provide simply are not there, and that’s the sad truth.
The Junior Developer felt like shouting but controlled himself, realizing that no good thing could come out of losing his temper. He slouched in his chair. The Senior Developer looked at the clearly disillusioned Junior Developer and realized the argument had ended in his favor. He headed back to his office.
At the coffee room door he turned around and said his final words to the Junior Developer.
- Besides, even if the atomic commits you seem to hold so dear would work, the improvements they provide would be minimal. They are a nice-to-have toy and will never account for anything more than that.
This story is not true. But it could be.
Variants of this discussion are being held every single day in software development companies and communities around the world.
I just released a new PyGObject for GNOME 3.7.3, which is due on Wednesday.
This is mostly a bug fix release. There is one API addition: it brings back official support for calling GLib.io_add_watch() with a Python file object or fd as the first argument, in addition to the official API which expects a GLib.IOChannel object. These modes were marked as deprecated in 3.7.2 (only).
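For illustration, a rough sketch of the two call styles (hedged: the callback signature and the GLib.IOChannel.unix_new() wrapping reflect my reading of the overrides, so double-check against the PyGObject documentation):

import sys
from gi.repository import GLib

def on_data(source, condition):
    # called when data is available; return True to keep the watch installed
    print("readable:", source)
    return True

# Legacy style, re-allowed in this release: a file object (or fd) as the first argument.
GLib.io_add_watch(sys.stdin, GLib.IOCondition.IN, on_data)

# Official style: wrap the fd in a GLib.IOChannel and pass an explicit priority.
channel = GLib.IOChannel.unix_new(sys.stdin.fileno())
GLib.io_add_watch(channel, GLib.PRIORITY_DEFAULT, GLib.IOCondition.IN, on_data)

GLib.MainLoop().run()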
Thanks to all contributors!
Summary of changes (see change log for complete details):
When writing system integration tests it often happens that I want to mount some tmpfses over directories like /etc/postgresql/ or /home, and run the whole script with an unshared mount namespace so that (1) it does not interfere with the real system, and (2) is guaranteed to clean up after itself (unmounting etc.) after it ends in any possible way (including SIGKILL, which breaks usual cleanup methods like “trap”, “finally”, “def tearDown()”, “atexit()” and so on).
In gvfs’ and postgresql-common’s tests, which both have been around for a while, I prepare a set of shell commands in a variable and pipe that into unshare -m sh, but that has some major problems: it doesn’t scale well to large programs, looks rather ugly, breaks syntax highlighting in editors, and it destroys the real stdin, so you cannot e. g. call a “bash -i” in your test for interactively debugging a failed test.
I just changed postgresql-common’s test runner to use unshare/tmpfses as well, and needed a better approach. What I eventually figured out preserves stdin, $0, and $@, and still looks like a normal script (i. e. not just a single big string). It still looks a bit hackish, but I can live with that:
#!/bin/sh
set -e
# call ourselves through unshare in a way that keeps normal stdin, $0, and CLI args
unshare -uim sh -- -c "`tail -n +7 $0`" "$0" "$@"
exit $?

# unshared program starts here
set -e
echo "args: $@"
echo "mounting tmpfs"
mount -n -t tmpfs tmpfs /etc
grep /etc /proc/mounts
echo "done"
As Unix/Linux’ shebang parsing is rather limited, I didn’t find a way to do something like
#!/usr/bin/env unshare -m sh
If anyone knows a trick which avoids the “tail -n +7” hack and having to pay attention to passing around “$@”, I’d appreciate a comment on how to simplify this.
Software quality has received a lot of attention recently. There have been tons of books, blog posts, conferences and the like on improving quality. Tools and practices such as TDD, automatic builds, agile methods, pair programming and static code analysers are praised for improving code quality.
And, indeed, that is what they have done.
But one should never mix the tool with the person using it. All these wonderful tools are just that: tools. They are not the source of quality, only facilitators of it. The true essence of quality does not flow from them. It comes from somewhere else entirely. When distilled down to its core, there is only one source of true quality.
The only way to get consistently high quality code is that the people who generate it care about it. This means that they have a personal interest in their code tree. They want it to succeed and flourish. In the best case they are even proud of it. This is the foundation all quality tools lie on.
If caring does not exist, even the best of tools can not help. This is due to the fact that human beings are very, very good at avoiding work they don’t want to do. As an example, let’s look at code review. A caring person will review code to the best of their abilities, because they want the end result to be the best it can be. A non-caring one will shrug, think “yeah, sure, fine, whatever” and push the accept button, because it’s less work for them and they know that their merge requests will go in easier if there is a general (though unspoken) consensus of doing things half-assed.
Unfortunately caring is not something you can buy, it is something you must birth. Free food and other services provided by companies such as Valve and Google can be seen as one way of achieving quality. If a company sincerely cares about its employees, they will in return care about the quality of their work.
All that said, here is my proposal for a coder’s mascot:
The main new feature is support for foreign architectures in apport-retrace. If apport-retrace runs in sandbox mode on a crash that was not produced on the same architecture that apport-retrace is running on, it will now build a sandbox for the report’s architecture and invoke gdb with the necessary magic options to produce a proper stack trace (and the other gdb information).
Right now this works for i386, x86_64, and ARMv7, but if someone is interested in making this work for other architectures, please ping me.
This is rolled out to the Launchpad retracers, see for example Bug #1088428. So from now on you can report your armhf crashes to Launchpad and they ought to be processed. Note that I did a mass-cleanup of old armhf crash bugs this morning, as the existing ones were way too old to be retraced.
For those who are running their own retracers for their project: you need to add an armhf-specific apt sources list to your per-release configuration directory, e. g. Ubuntu 12.04/armhf/sources.list, as armhf is on ports.ubuntu.com instead of archive.ubuntu.com. Also, you need to add an armhf crash database to your crashdb.conf and add a cron job for the new architecture. You can see what all this looks like in the configuration files for the Launchpad retracers.
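Such a sources.list might look roughly like this (illustrative; adjust the release name and components to your setup):

# Ubuntu 12.04/armhf/sources.list -- armhf packages live on ports.ubuntu.com
deb http://ports.ubuntu.com/ubuntu-ports precise main restricted universe multiverse
deb http://ports.ubuntu.com/ubuntu-ports precise-updates main restricted universe multiverse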
The other improvement concerns package hooks. So far, when a package hook crashed, the exception was only printed to stderr, where most people would never see it when using the GTK or KDE frontend. With 2.7 these exceptions are also added to the report itself (HookError_filename), so that they appear in the bug reports.
The release also fixes a couple of bugs; see the release notes for details.