Canonical Voices

Posts tagged with 'development'

Jussi Pakkanen

One of the main ways of reducing code complexity (and thus compile times) in C/C++ is forward declaration. The most basic form of it is this:

class Foo;

This tells the compiler that there will be a class called Foo but it does not specify it in more detail. With this declaration you can’t deal with Foo objects themselves but you can form pointers and references to them.

Typically you would use forward declarations in this manner.

class Bar;

class Foo {
  void something();
  void method1(Bar *b);
  void method2(Bar &b);
};

Correspondingly if you want to pass the objects themselves, you would typically do something like this.

#include"Bar.h"

class Foo {
  void something();
  void method1(Bar b);
  Bar method2();
};

This makes sense because you need to know the binary layout of Bar in order to pass it properly to and from a method. Thus a forward declaration is not enough; you must include the full header, otherwise you can’t use the methods of Foo.

But what if some class does not use either of the methods that deal with Bars? What if it only calls the method something()? It would still need to parse all of Bar.h (and everything it #includes) even though it never uses Bar objects. This seems inefficient.

It turns out that including Bar.h is not necessary, and you can instead do this:

class Bar;

class Foo {
  void something();
  void method1(Bar b);
  Bar method2();
};

You can define functions taking or returning full objects with forward declarations just fine. The catch is that those users of Foo that use the Bar methods need to include Bar.h themselves. Correspondingly, those that do not deal with Bar objects themselves never need to include Bar.h, even indirectly. If you ever find out that they do, it is proof that your #includes are not minimal. Fixing these include chains will make your source files more isolated and decrease compile times, sometimes dramatically.
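
To make this concrete, here is a minimal sketch of the three kinds of files involved (the file names and caller functions are illustrative, not from the original post):

// Foo.h: a forward declaration of Bar is enough here.
class Bar;

class Foo {
public:
  void something();
  void method1(Bar b);
  Bar method2();
};

// caller1.cpp: only calls something(), so it never needs Bar.h.
#include"Foo.h"

void caller1(Foo &f) {
  f.something();
}

// caller2.cpp: passes Bar objects, so it must include Bar.h itself.
#include"Foo.h"
#include"Bar.h"

void caller2(Foo &f, Bar &b) {
  f.method1(b);
}

If caller1.cpp fails to compile without including Bar.h, something in the include chain is pulling in more than it needs.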

You only need to #include the full definition of Bar if you need:

  • to use its services (constructors, methods, constants, etc)
  • to know its memory layout

In practice the latter means that you need to either call or implement a function that takes a Bar object rather than a pointer or reference to it.

For other uses a forward declaration is sufficient.

Post scriptum

The discussion above holds even if Foo and Bar are templates, but keeping template classes this clean can be a lot harder and may in some instances be impossible. You should still try to minimize header includes as much as possible.

Read more
Anthony Dillon

I was recently asked to attend a cloud sprint in San Francisco as a front-end developer for the new Juju GUI product. I had the pleasure of finally meeting the guys I have worked with on the project and who have ultimately helped me along the way.

Here is a collection of things I learnt during my week overseas.

Mocha testing

Mocha is a JavaScript test framework that can run asynchronous tests in a browser. Previously I found it difficult to imagine a use case when developing a site, but I now know that any interactive element of a site could benefit from Mocha testing.

This is by no means a full tutorial or feature overview of Mocha; these are simply my findings from a week with the UI engineering team.

Break down small elements of your app or website into logic tests

If you take a system like user login and registration, it is much easier to test each function of the system separately. For example, if the user hits the signup button you should test that the registration form is then visible to the user. Then work methodically through each step of the process, testing as many different inputs as you can think of.

Saving your bacon

Testing undoubtedly slows down initial development but catches a lot of mistakes and flaws in the system before anything lands in the main code base. It also means you don’t have to manually check each part of the system again by hand — you simply run the test suite and see the ticks roll in.

Speeds up bug squashing

Bug fixing becomes easier for both the reporter and the developer. If the reporter submits a test that fails due to a bug, the developer gets the full scope of the issue, and once the test passes both the developer and the reporter can be confident the problem no longer exists.

Linting

I have read a lot about linting in the past but have not needed to use it on any project I have worked on to date, so I was very happy to use, and be taught about, the linting performed by the UI engineering team.

Enforces a standard coding syntax

I was very impressed with the level of code standards it enforces. It requires all code to be written in a certain way, from indenting and commenting to unused variables. The result is that anyone using the code can pick it up and read it as if it were written by one person, when in fact many people may have contributed to it.

Code reviews

In my opinion code reviews should be performed on all front-end work to discourage sloppy code and encourage shared knowledge.

Markup

Markup should be semantic. This can be a matter of opinion, but shared discussion will get the team to an agreed solution, which can then be reused by others in similar situations.

CSS

CSS can be difficult as there are different ways to achieve a similar result, but with code reviews the style used will become common practice within the team.

JavaScript

JavaScript is a perfect candidate for review, as different people have different methods of coding. A review will catch anything sloppy or any shortcuts in the code, and makes sure your code is refactored to best practice the first time.

Conclusion

Test-driven development (TDD) does slow the development process down, but it enforces better output from the time you spend on the code and fewer bugs in the future.

If someone writes a failing test for your code which is expected to pass, working on the code until the test passes is a much easier way to demonstrate that the code now works, along with all the other tests for that function.

I truly believe in code reviews now. Previously I was sceptical about them. I used to think that “because my code is working” I didn’t need reviews and they would slow me down. But a good reviewer will catch things like “it works, but didn’t you take a shortcut two classes ago which you meant to go back and refactor?”. We all want our code to be perfect and to learn from others on a daily basis. That is what code reviews give us.

Read more
Inayaili de León Persson

Release month is always a busy one for the web team, and this time was no exception with the Ubuntu 13.10 release last week.

In the last few weeks we’ve worked on:

  • Ubuntu 13.10 release: we’ve updated www.ubuntu.com for the latest Ubuntu release
  • Updates to the new Ubuntu OpenStack cloud section: based on some really interesting feedback we got from Tingting’s research, we’ve updated the new pages to make them easier to understand
  • Canonical website: Carla has conducted several workshops and interviews with stakeholders and has defined key audiences and user journeys
  • Juju GUI: on-boarding is now ready to land in Juju soon
  • Fenchurch (our CMS): the demo services are fixed and our publishing speed has seen a 90% improvement!

And we’re currently working on:

  • Responsive mobile pilot: we’ve been squashing the most annoying bugs and it’s now almost ready for the public alpha release!
  • Canonical.com: with some of the research for the project already completed, Carla will now be working on creating the site’s information architecture and wireframing its key sections
  • Juju GUI: Alejandra, Luca, Spencer, Peter and Anthony are in a week-long sprint in San Francisco for some intense Juju-related work (lucky them!)
  • developer.ubuntu.com: we have been working with the Community team to update the site’s design to be more in line with www.ubuntu.com and the first iteration will be going live soon
  • Fenchurch: we are now working on a new download service

Release day at the Canonical office in London

Have you got any questions or suggestions for us? Would you like to hear about any of these projects and tasks in more detail? Add your thoughts in the comments.

Read more
Jussi Pakkanen

With the release of C++11 something quite extraordinary has happened. Its focus on usable libraries, value types and other niceties has turned C++, conceptually, into a scripting language.

This seems like a weird statement to make, so let’s define exactly what we mean by that. Scripting languages differ from classical compiled languages such as C in the following ways:

  • no need to manually manage memory
  • expressive syntax, complex functionality can be implemented in just a couple of lines of code
  • powerful string manipulation functions
  • large standard library

As of C++11 all these hold true for C++. Let’s examine this with a simple example. Suppose we want to write a program that reads all lines from a file and writes them in a different file in sorted order. This is classical scripting language territory. In C++11 this code would look something like the following (ignoring error cases such as missing input arguments).

#include<string>
#include<vector>
#include<algorithm>
#include<fstream>

using namespace std;

int main(int argc, char **argv) {
  ifstream ifile(argv[1]);
  ofstream ofile(argv[2]);
  string line;
  vector<string> data;
  while(getline(ifile, line)) {
    data.push_back(line);
  }
  sort(data.begin(), data.end());
  for(const auto &i : data) {
    ofile << i << std::endl;
  }
  return 0;
}

That is some tightly packed code. Ignoring include boilerplate and the like leaves us with roughly ten lines of code. If you were to do this with plain C using only its standard library, merely implementing the getline functionality reliably would take more lines of code. Not to mention it would be tricky to get right.

Other benefits include:

  • every single line of code is clear, understandable and expressive
  • memory leaks cannot happen
  • the code could easily be reworked into a library function
  • smaller memory footprint due to not needing a VM
  • compile time with -O3 is roughly the same as Python VM startup and has to be done only once
  • faster than any non-JITted scripting language

Now, obviously, this won’t mean that scripting languages will disappear any time soon (you can have my Python when you pry it from my cold, dead hands). What it does do is indicate that C++ is quite usable in fields one traditionally has not expected it to be.

Read more
Inayaili de León Persson

We might have been quiet, but we have been busy! Here’s a quick overview of what the web team has been up to recently.

In the past month we’ve worked on:

  • New juju.ubuntu.com website: we’ve revamped the information architecture, revisited the key journeys and updated the look to be more in line with www.ubuntu.com
  • Fenchurch (our CMS): we’ve worked on speeding up deployment and continuous testing
  • New Ubuntu OpenStack cloud section on www.ubuntu.com/cloud: we’ve launched a restructured cloud section, with links to more resources, clearer journeys and updated design
  • Juju GUI: we’ve launched the brand new service inspector

And we’re currently working on:

  • 13.10 release updates: the new Ubuntu release is upon us, and we’re getting the website ready to show it off
  • A completely new project that will be our mobile/responsive pilot: we’re updating our web patterns to a more future-friendly shape, investigating solutions to handle responsive images, and we’ve set up a (growing) mobile device testing suite — watch this space for more on this project
  • Fenchurch: we’re improving our internal demo servers and enhancing performance on the downloads page to help deal with release days!
  • Usability testing of the new cloud section: following the aforementioned launch, Tingting is helping us test these pages with their target audience — and we’ve already found loads of things we can improve!
  • A new canonical.com: we haven’t worked on Canonical’s main website in a while, so we’re looking into making it leaner and meaner. As a first stage, Carla has been conducting internal interviews and analysing the existing content
  • Juju GUI: we’re designing on-boarding and a new notification system, and we’re finalising designs for the masthead, service block and relationship lines

We’ve also learnt that Spencer’s favourite author is Paul Auster. And Tristram wrote a post on his blog about his first experience with Juju.

Spencer giving his 5×5 presentation at the web team meeting on 19 September 2013

Have you got any questions or suggestions for us? Would you like to hear about any of these projects and tasks in more detail? Please let us know your thoughts in the comments.

Read more
Jussi Pakkanen

The problem

Suppose you have a machine with 8 cores. Also suppose you have the following source packages that you want to compile from scratch.

eog_3.8.2.orig.tar.xz
grilo_0.2.6.orig.tar.xz
libxml++2.6_2.36.0.orig.tar.xz
Python-3.3.2.tar.bz2
glib-2.36.4.tar.xz
libjpeg-turbo_1.3.0.orig.tar.gz
wget_1.14.orig.tar.gz
grail-3.1.0.tar.bz2
libsoup2.4_2.42.2.orig.tar.xz

You want to achieve this as fast as possible. How would you do it?

Think carefully before proceeding.

The solution

Most of you probably came up with the basic idea of compiling one after the other with ‘make -j 8’ or equivalent. There are several reasons to do this, the main one being that this saturates the CPU.

The other choice would be to start the compilation on all subdirs at the same time but with ‘make -j 1’. You could also run two parallel build jobs with ‘-j 4’ or four with ‘-j 2’.

But surely that would be pointless. Doing one thing at a time maximises data locality so the different build trees don’t have to compete with each other for cache.

Right?

Well, let’s measure what actually happens.

[Figure: measured build times for the different parallelisation strategies]

The first bar shows the time when running with ‘-j 8’. It is slower than all the other combinations; in fact it is over 40% (one minute) slower than the fastest one, while all the alternatives are roughly equally fast.

Why is this?

In addition to the compilation and linking processes, there are parts of the build that cannot be parallelised. There are two main ones in this case. Can you guess what they are?

What all of these projects have in common is that they are built with Autotools. The configure step takes a very long time and can’t be parallelised with -j. When building consecutively, even with perfect parallelisation, the build time can never drop below the sum of the configure script run times. This is easily half a minute each on any non-trivial project, even on the fastest i7 machine that money can buy.

The second thing is time that is lost inside Make. Its data model makes it very hard to optimize. See all the gory details here.

The end result of all this is a hidden productivity sink, a minute lost here, one there and a third one over there. Sneakily. In secret. In a way people have come to expect.

These are the worst kinds of productivity losses because people honestly believe that this is just the way things are, have always been and shall be evermore. That is what their intuition and experience tells them.

The funny thing about intuition is that it lies to you. Big time. Again and again.

The only way out is measurements.

Read more
Jussi Pakkanen

We all like C++’s container classes such as maps. The main negative thing about them is the lack of persistence. Ending your process makes the data structure go away. If you want to store it, you need to write code to serialise it to disk and then deserialise it back to memory again when you need it. This is tedious work that has to be done over and over again.

It would be great if you could command STL containers to write their data to disk instead of memory. The reductions in application startup time alone would be welcomed by all. In addition most uses for small embedded databases such as SQLite would go away if you could just read stuff from persistent std::maps.

The standard does not provide for this because serialisation is a hard problem. But it turns out this is, in fact, possible to do today. The only tools you need are the standard library and basic standards conforming C++.

Before we get to the details, please note this warning from the society of responsible coding.


What follows is the single most evil piece of code I have ever written. Do not use it unless you understand the myriad of ways it can fail (and possibly not even then).

The basic problem is that C++ containers work only with memory but serialisation requires writing bytes to disk. The tried and true solution for this problem is memory mapped files. It is a technique where a certain portion of a process’s memory is mapped to a backing file. Any changes to that memory will be written to the disk by the kernel. This gives us memory serialisation.

This is only half of the problem, though. STL containers and others allocate the memory they need through operator new. The way new works is implementation defined. It may give out addresses that are scattered around the memory space. We can’t mmap the entire address space because it would take too much space and serialise lots of stuff we don’t care about.

Fortunately C++ allows you to specify custom allocators for containers. An allocator is an object that does memory allocations for the object it is tied to. This indirection allows us to write our own allocator that gives out raw memory chunks from the mmapped memory area.

But there is still a problem. Since pointers refer to absolute memory locations we would need to have the mmapped memory area in the same location in every process that wants to use it. It turns out that you can enforce the address at which the memory mapping is to be done. This gives us an outline on how to achieve our goal.

  • create an empty file for backing (10 MB in this example)
  • mmap it in place
  • populate the data structure with objects allocated in the mmapped area
  • close the creator program
  • start the reader program, mmap the data and cast the root object into existence

And that’s it. Here’s how it looks in code. First some declarations:

// headers needed by the allocator and the programs below
#include<sys/mman.h>
#include<fcntl.h>
#include<cstdio>
#include<new>
#include<string>
#include<map>

void *mmap_start = (void*)139731133333504;
size_t offset = 1024;

template <typename T>
class MmapAlloc {
  ....
  pointer allocate(size_t num, const void *hint = 0) {
    long returnvalue = (long)mmap_start + offset;
    size_t increment = num * sizeof(T) + 8;
    increment -= increment % 8;
    offset += increment;
    return (pointer)returnvalue;
  }
  ...
};

typedef std::basic_string<char, std::char_traits<char>,
  MmapAlloc<char>> mmapstring;
typedef std::map<mmapstring, mmapstring, std::less<mmapstring>,
  MmapAlloc<mmapstring> > mmapmap;

First we declare the absolute memory address of the mmapping (it can be anything as long as it won’t overlap an existing allocation). The allocator itself is extremely simple: it just hands out memory at the current offset in the mapping and increments the offset by the number of bytes allocated (plus alignment). Deallocated memory is never actually freed, it remains unused (destructors are called, though). Last we have typedefs for our mmap backed containers.

Population of the data sets can be done like this.

int main(int argc, char **argv) {
    int fd = open("backingstore.dat", O_RDWR);
    if(fd < 0) { // check the fd before handing it to mmap
        printf("Open failed.\n");
        return 1;
    }
    void *mapping = mmap(mmap_start, 10*1024*1024,
      PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, fd, 0);
    if(mapping == MAP_FAILED) {
        printf("MMap failed.\n");
        return 1;
    }
    mmapstring key("key");
    mmapstring value("value");
    auto map = new(mapping)mmapmap();
    (*map)[key] = value;
    printf("Sizeof map: %ld.\n", (long)map->size());
    printf("Value of 'key': %s\n", (*map)[key].c_str());
    return 0;
}

We construct the root object at the beginning of the mmap and then insert one key/value pair. The output of this application is what one would expect.

Sizeof map: 1.
Value of 'key': value

Now we can use the persisted data structure in another application.

int main(int argc, char **argv) {
    int fd = open("backingstore.dat", O_RDONLY);
    void *mapping;
    mapping = mmap(mmap_start, 10*1024*1024, PROT_READ,
     MAP_SHARED | MAP_FIXED, fd, 0);
    if(mapping == MAP_FAILED) {
        printf("MMap failed.\n");
        return 1;
    }
    std::string key("key");
    auto *map = reinterpret_cast<std::map<std::string,
                                 std::string> *>(mapping);
    printf("Sizeof map: %ld.\n", (long)map->size());
    printf("Value of 'key': %s\n", (*map)[key].c_str());
    return 0;
}

Note in particular how we can specify the type as std::map<std::string, std::string> rather than the custom allocator version in the creator application. The output is this.

Sizeof map: 1.
Value of 'key': value

It may seem a bit anticlimactic, but what it does is quite powerful.

Extra evil bonus points

If this is not evil enough for you, just think about what other things can be achieved with this technique. As an example you can have the backing file mapped to multiple processes at the same time, in which case they all see changes live. This allows you to have things such as standard containers that are shared among processes.

Read more
David Murphy (schwuk)

I was browsing Twitter last night when Thoughtbot linked to their post about commit messages.

This was quite timely as my team has been thinking about improving the process of creating our release notes, and it has been proposed that we generate them automatically from our commit messages. This in turn requires that we have commit messages of sufficient quality, which – to be honest – we don’t always have. So the second proposal is to enforce “good” commit messages as part of reviewing and approving merge proposals into our projects. See this post from Kevin on my team for an overview of our branching strategies to get an idea of how our projects are structured.

We still need to define what constitutes a “good” message, but we will certainly use both the article from Thoughtbot and the oft-referenced advice from Tim Pope as our basis. We are also only planning to apply this to commits to trunk because, well, you don’t need a novel – or even a short story – for every commit in your spike branch!

Now, back to the Thoughtbot article, and this piece of advice stood out for me:

Never use the -m <msg> / --message=<msg> flag to git commit.

Since I first discovered -m I have used it almost exclusively, thinking I was being so clever and efficient, but in reality I’ve been restricting what I could say to what felt “right” on an 80-character terminal. If nothing else, I will be trying to avoid the use of -m from now on.

Read more
Jussi Pakkanen

Some C++ code bases seem to compile much more slowly than others. It is hard to compare them directly because they very often have different sizes. Thus it is hard to encourage people to work on speed because there are no hard numbers to back up your claims.

To get around this I wrote a very simple compile time measurer. The code is available here. The basic idea is quite simple: provide a compiler wrapper that measures the duration of each compiler invocation and the number of lines (including comments, empty lines, etc.) in the source file. Usage is quite simple. First you configure your code base.

CC='/path/to/smcc.py gcc' CXX='/path/to/smcc.py g++' configure_command

Then you compile it.

SMCC_FILE=/path/to/somewhere/sm_times.txt compile_command

Finally you run the analyzer script on the result file.

sm-analyze.py /path/to/sm_times.txt

The end result is the average number of lines compiled per second, as well as per-file compile speeds sorted from slowest to fastest.

I ran this on a couple of code bases and here are the results. The test machine was an i7 with 16 GB of RAM, using eight parallel compile processes. An unoptimized debug configuration was always chosen.

                   avg   worst     best
Libcolumbus     287.79   48.77  2015.60
Mediascanner     52.93    5.64   325.55
Mir             163.72   10.06 17062.36
Lucene++         65.53    7.57   874.88
Unity            45.76    1.86  1016.51
Clang           238.31    1.51 20177.09
Chromium        244.60    1.28 49037.79

For comparison I also measured a plain C code base.

                   avg   worst     best
GLib           4084.86  101.82 19900.18

We can see that C++ compiles quite a lot more slowly than plain C. The main interesting thing is that C++ compilation speed can vary by an order of magnitude between projects. The fastest is libcolumbus, which has been designed from the ground up to be fast to compile.

What we can deduce from this experiment is that C++ compilation speed is a feature of the code base, not so much of the language or compiler. It also means that if your code base is a slow one, it is possible to make it compile up to 10 times faster without any external help. The tools to do it are simple: minimizing interdependencies and external dependencies. This is one of those things that is easy to do when starting anew but hard to retrofit to code bases that resemble a bowl of ramen. The payoff, however, is undeniable.

Read more
Jussi Pakkanen

Pimpl is a common idiom in C++. It means hiding the implementation details of a class with a construct that looks like this:

class pimpl;

class Thing {
private:
  pimpl *p;
public:
  ...
};

This cuts down on compilation time because you don’t have to #include all the headers required for the implementation of this class. The downside is that p needs to be dynamically allocated in the constructor, which means a call to new. For frequently constructed objects this can be slow and lead to memory fragmentation.

Getting rid of the allocation

It turns out that you can get rid of the dynamic allocation with a little trickery. The basic approach is to reserve space in the parent object with, say, a char array. We can then construct the pimpl object there with placement new and delete it by calling the destructor.

A header file for this kind of a class looks something like this:

#ifndef PIMPLDEMO_H
#define PIMPLDEMO_H

#define IMPL_SIZE 24

class PimplDemo {
private:
  char data[IMPL_SIZE];

public:
  PimplDemo();
  ~PimplDemo();

  int getNumber() const;
};

#endif

IMPL_SIZE is the size of the pimpl object. It needs to be manually determined. Note that the size may be different on different platforms.
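
One caveat the code above does not address: a plain char array is only guaranteed to have char alignment, which may be too weak for the object constructed inside it. Since C++11 is required anyway (the implementation below uses static_assert), a more cautious version of the header could enforce a stronger alignment. This is a sketch, assuming maximal fundamental alignment is enough for the pimpl:

#ifndef PIMPLDEMO_H
#define PIMPLDEMO_H

#include<cstddef> // for std::max_align_t

#define IMPL_SIZE 24

class PimplDemo {
private:
  // alignas guards against the array being under-aligned
  // for the implementation class hidden inside it.
  alignas(std::max_align_t) char data[IMPL_SIZE];

public:
  PimplDemo();
  ~PimplDemo();

  int getNumber() const;
};

#endif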

The corresponding implementation looks like this.

#include"pimpldemo.h"
#include<vector>

using namespace std;

class priv {
public:
  vector<int> foo;
};

#define P_DEF priv *p = reinterpret_cast<priv*>(data)
#define P_CONST_DEF const priv *p = reinterpret_cast<const priv*>(data)

PimplDemo::PimplDemo() {
  static_assert(sizeof(priv) == sizeof(data), "Pimpl array has wrong size.");
  P_DEF;
  new(p) priv;
  p->foo.push_back(42); // Just for show.
}

PimplDemo::~PimplDemo() {
  P_DEF;
  p->~priv();
}

int PimplDemo::getNumber() const {
  P_CONST_DEF;
  return (int)p->foo.size();
}

Here we define two macros that create a variable for accessing the pimpl. At this point we can use it just as if it were defined in the traditional way. Note the static assert that checks, at compile time, that the space we have reserved for the pimpl is the same as what the pimpl actually requires.

We can test that it works with a sample application.

#include<cstdio>
#include<vector>
#include"pimpldemo.h"

int main(int argc, char **argv) {
  PimplDemo p;
  printf("Should be 1: %d\n", p.getNumber());
  return 0;
}

The output is 1 as we would expect. The program is also Valgrind clean so it works just the way we want it to.

When should I use this technique?

Never!

Well, ok, never is probably a bit too strong. However this technique should be used very sparingly. Most of the time the new call is insignificant. The downside of this approach is that it adds complexity to the code. You also have to keep the backing array size up to date as you change the contents of the pimpl.

You should only use this approach if you have an object in the hot path of your application and you really need to squeeze the last bit of efficiency out of your code. As a rough guide only about 1 of every 100 classes should ever need this. And do remember to measure the difference before and after. If there is no noticeable improvement, don’t do it.

Read more
Jussi Pakkanen

The quest for software quality has given us lots of new tools: new compiler warnings, making the compiler treat all warnings as errors, style checkers, static analyzers and the like. These are all good things to have. Sooner or later in a project someone will decide to make them mandatory. This is fine as well, and the reason we have continuous integration servers.

Still, some people might, at times, propose MRs that cause CI to emit errors. The issues are fixed, some time is lost in doing this, but on the whole it is no big deal. But then someone comes to the conclusion that these checks should be mandatory on every build so that the errors never get to the CI server. Having all builds pristine all the time is great, the reasoning goes, because then errors are found as soon as possible. This is as per the agile manifesto and universally a good thing to have.

Except that it is not. It is terrible! It is a massive drain on productivity and the kind of thing that makes people hate their job and all things related to it.

This is a strong and somewhat counter-intuitive statement. Let’s explore it with an example. Suppose we have this simple snippet of code.

  x = this_very_long_function_that_does_something(foo, bar, baz10, foofoofoo);

Now let’s suppose we have a bug somewhere. As part of the debugging cycle we would like to check what would happen if x had the value 3 instead of whatever value the function returns. The simple way to check is to change the code like this.

  x = this_very_long_function_that_does_something(foo, bar, baz10, foofoofoo);
  x = 3;

This does not give you the result you want. Instead you get a compile/style/whatever checker error. Why? Because you assign to variable x twice without using the first value for anything. This is called a dead assignment. It may cause latent bugs, so the checker issues an error halting the build.

Fair enough, let’s do this then.

  this_very_long_function_that_does_something(foo, bar, baz10, foofoofoo);
  x = 3;

This won’t work either. The code is ignoring the return value of the function, which is an error (in certain circumstances but not others).

Grumble, grumble. On to iteration 3.

  //x = this_very_long_function_that_does_something(foo, bar, baz10, foofoofoo);
  x = 3;

This will also fail. The line with the comment is over 80 characters wide and this is not tolerated by many style guides, presumably because said code bases are being worked on by people who only have access to text consoles. On to attempt 4.

//x = this_very_long_function_that_does_something(foo, bar, baz10, foofoofoo);
  x = 3;

This won’t work either, for two different reasons. Case 1: the variables used as arguments might not be used anywhere else and will therefore trigger unused variable warnings. Case 2: if any argument is passed by reference or pointer, its state might not be properly updated.

The latter case is the worst, because no static checker can detect it. Requiring the code to conform to a cosmetic requirement caused it to grow an actual bug.

Getting this kind of test code through all the checkers is a lot of work. Sometimes more than the actual debugging work. Most importantly, it is completely useless work. Making this kind of exploratory code fully polished is useless because it will never, ever enter any kind of production. If it does, your process has bigger issues than code style. Any time spent working around a style checker is demotivating wasted effort.

But wait, it gets worse!

These checkers are usually slow. They can take over ten times longer to do their checks than plainly compiling the source code would. This means that developer productivity with these tools is, correspondingly, several times lower than it could be. Programmers are expensive; having them sit around watching checker output scroll slowly by in a terminal is not a very good use of their time.

Fortunately there is a correct solution for this. Make the style checks a part of the unit test suite, possibly as part of an optional suite of slow tests. Run said tests on every CI merge. This allows developers to be fast and productive most of the time but precise and polished when required.

Read more
David Murphy (schwuk)

As part of our self-improvement and knowledge sharing within Canonical, our group (Professional and Engineering Services) regularly – at least once a month – runs what we call an “InfoSession”. Basically it is a Google Hangout on Air with a single presenter on a topic that is of interest or relevance to others, and one of my responsibilities is organising them. Previously we have had sessions on:

  • Go (a couple of sessions in fact)
  • SystemTap
  • Localization (l10n) and internationalization (i18n)
  • Juju
  • Graphviz
  • …and many others…

Today the session was on continuous integration with Tarmac and Vagrant, presented by Daniel Manrique from our certification team. In his own words:

Merge requests and code reviews are a fact of life in Canonical. Most projects start by manually merging approved requests, including running a test suite prior to merging.

This infosession will talk about tools that automate this workflow (Tarmac), while leveraging your project’s test suite to ensure quality, and virtual machines (using Vagrant) to provide multi-release, repeatable testing.

Like most of our sessions it is publicly available; here it is for your viewing pleasure:

Read more
pitti

umockdev 0.3 introduced the notion of an “umockdev script”, i. e. recording the read()s and write()s that happen on a device node such as ttyUSB0. With that one can successfully run ModemManager in an umockdev testbed and pretend that one has e. g. a USB 3G stick.

However, this didn’t yet apply to the Ubuntu phone stack, where ofonod talks to Android’s “rild” (Radio Interface Layer Daemon) through the Unix socket /dev/socket/rild. Thus over the last few days I worked on extending umockdev’s script recording and replaying to Unix sockets as well (which behave quite differently and are quite a bit more complex than ordinary files and character devices). This is released in 0.4; however, you should actually get 0.4.1 if you want to package it.

So you can now make a script of how ofonod makes a phone call (or other telephony action) through rild, and later replay that in an umockdev testbed without having to have a SIM card, or even a phone. This should help with reproducing and testing bugs like “ofonod goes crazy when roaming”: it’s enough to record the communication for a person who is in a situation to reproduce the bug, then a developer can study what’s going wrong independent of hardware and mobile networks.

How does it work? If you have used umockdev before, the pattern should be clear now: Start ofonod under umockdev-record and tell it to record the communication on /dev/socket/rild:

  sudo pkill ofonod; sudo umockdev-record -s /dev/socket/rild=phonecall.script -- ofonod -n -d

Now launch the phone app and make a call, send a SMS, or anything else you want to replay later. Press Control-C when you are done. After that you can run ofonod in a testbed with the mocked rild:

  sudo pkill ofonod; sudo umockdev-run -u /dev/socket/rild=phonecall.script -- ofonod -n -d

Note the new --unix-stream/-u option which will create /tmp/umockdev.XXXXXX/dev/socket/rild, attach some server threads to accept client connections, and replay the script on each connection.

But wait, that fails with some

   ERROR **: ScriptRunner op_write[/dev/socket/rild]: data mismatch; got block '...', expected block '...'

error! Apparently ofono’s messages are not 100% predictable/reproducible; I guess there are some time stamps or bits of uninitialized memory involved. Normally umockdev requires that the program under test sticks to the previously recorded write() parts of the script, to ensure that the echoed read()s stay in sync and everything works as expected. But for cases like these where some fuzz is expected, umockdev 0.4 introduces setting a “fuzz percentage” in scripts. To allow 5% byte value mismatches, i. e. in a block of n bytes there can be n*0.05 bytes which are different from the script, you’d put a line

  f 5 -

before the ‘w’ block that will get jitter, or just put it at the top of the file to allow it for all messages. Please see the script format documentation for details.

After doing that, ofonod works, and you can do the exact same operations that you recorded, with e. g. the phone app. Doing other operations will fail, of course.

As always, umockdev-run -u is of course just a CLI convenience wrapper around the umockdev API. If you want to do the replay in a C test suite, you can call

   umockdev_testbed_load_socket_script(testbed, "/dev/socket/rild",
                                       SOCK_STREAM, "path/to/phonecall.script", &error);

or the equivalent in Python or Vala, as usual.

If you are an Ubuntu phone developer and want to use this, please don’t hesitate to talk to me. This is all in saucy now, so on the Ubuntu phone it’s a mere “sudo apt-get install umockdev” away.

Read more
Chee Wong

Right… so where should we start? First post.

Hello, my name is Chee, and I am an industrial designer.

In this post I will share some materials, stories and parts of the process behind the development of the Ubuntu Edge.


We started off by pulling the key elements of the Suru theme, and expanded on that, in order to explore the transition from a digital user experience, to a physical one.

 


Once the rough ideas were formed, the fun part started, as we dived right into visualising the concepts: pencils, sketching pads, markers, clippings, samples, colour chips and anything else interesting.

 

One of the best ways to visualise, experiment with and refine a design is to materialise it in any way possible. In the process of creating and fine-tuning the Ubuntu Edge, we turned to the methods known to be the most effective: model making, 3D CAD and 3D printing. In our case, we tried them all!

 


It’s equally important how the Ubuntu Edge feels in the hand, how it visually presents itself and how certain textures give visual cues to the perceived expression. How the materials work alongside each other without creating visual complexity is one of the key factors that either make or break a design.

After several rounds of refinement and fine-tuning, we pressed forward with what we have today as the Ubuntu Edge: from a rendering used to visualise it, to one that sits in front of us.

 

I hope you enjoy reading through the process, and let’s make it a reality.

The Ubuntu Edge


Read more
pitti

I’m happy to announce a new release 0.3 of umockdev.

The big new feature is the ability to fake character devices and provide recording and replaying of communications on them. This work is driven by our need to create automatic tests for the Ubuntu phone stack, i. e. pretending that we have a 3G or phone driver and ensuring that the higher level stacks behave as expected without actually having to have a particular modem. I don’t currently have a phone capable of running Ubuntu, so I tested this against the standard ModemManager daemon which we use in the desktop. But the principle is the same, it’s “just” capturing and replaying read() and write() calls from/to a device node.

In principle it ought to work in just the same way for device nodes other than ttys, e. g. input devices or DRI control; but that will require some slight tweaks in how the fake device nodes are set up. Please let me know if you are interested in a particular use case (preferably as a bug report).

With just the command line tools, this is how you would capture ModemManager talking to a USB 3G stick which creates /dev/ttyUSB{0,1,2}. The communication gets recorded into a text file, which umockdev calls a “script” (yay, my lack of imagination for names!):

# Dump the sysfs device and udev properties
$ umockdev-record /dev/ttyUSB* > huawei.umockdev

# Record the communication
$ umockdev-record -s /dev/ttyUSB0=0.script -s /dev/ttyUSB1=1.script \
     -s /dev/ttyUSB2=2.script -- modem-manager --debug

The --debug option for ModemManager is not necessary, but it’s nice to see what’s going on. Note that you should shut down the running system instance for that, or run this on a private D-BUS.

Now you can disconnect the stick (not necessary, just to clearly prove that the following does not actually talk to the stick), and replay in a test bed:

$ umockdev-run -d huawei.umockdev -s /dev/ttyUSB0=0.script -s /dev/ttyUSB1=1.script \
    -s /dev/ttyUSB2=2.script -- modem-manager --debug

Please note that the CLI options of umockdev-record and umockdev-run changed to be more consistent and fit the new features.

If you use the API, you can do the same with the new umockdev_testbed_load_script() method, which will spawn a thread that replays the script on the faked device node (which is just a PTY underneath).

If you want full control, you can also do all the communication from your test cases manually: umockdev_testbed_get_fd("/dev/mydevice") will give you a (bidirectional) file descriptor of the “master” end, so that whenever your program under test connects to /dev/mydevice you can directly talk to it and pretend that you are an actual device driver. You can look at the t_tty_data() test case to see what this looks like (that’s the test for the Vala binding, but it works in just the same way in C or the GI bindings).

I’m sure that there are lots of open ends here still, but as usual this work is use case driven; so if you want to do something with this, please let me know and we can talk about fine-tuning this.

In other news, with this release you can also cleanly remove mocked devices (umockdev_testbed_remove_device()), a feature requested by the Mir developers. Finally there are a couple of bug fixes; see the release notes for details.

I’ll upload this to Saucy today. If you need it for earlier Ubuntu releases, you can have a look at my daily builds PPA.

Let’s test!

Read more
Jussi Pakkanen

Boost is great. We all love it. However there is one gotcha that you have to keep in mind about it, which is the following:

You must never, NEVER expose Boost in your public headers!

Even if you think you have a valid case, you don’t! Exposing Boost is 100% absolutely the wrong thing to do always!

Why is that?

Because Boost has no stability guarantees. It can and will change at any time. What this means is that if you compile your library against a certain version of Boost, all people who ever link to it must use the exact same version. If an application links against two libraries that use different versions of Boost, it will not work and furthermore can’t be made to work. Unless you are a masochist and have different parts of your app #include different Boost versions. Also: don’t ever do that!

The message is clear: your public headers must never include anything from Boost. This is easy to check with grep. Make a test script and run it as part of your test suite. Failure to do so may cause unexpected lead poisoning courtesy of disgruntled downstream developers.
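
If your implementation genuinely needs Boost, one way to keep it out of your public headers is to hide it behind a pimpl, as discussed elsewhere on this blog. A minimal sketch (the class names and the choice of boost::any are illustrative only):

// mything.h: the public header, with no Boost in sight.
class MyThingPrivate;

class MyThing {
public:
  MyThing();
  ~MyThing();
  void doSomething();

private:
  MyThingPrivate *p; // Boost types live only behind this pointer.
};

// mything.cpp: Boost remains a private implementation detail.
#include"mything.h"
#include<boost/any.hpp>

class MyThingPrivate {
public:
  boost::any payload;
};

MyThing::MyThing() : p(new MyThingPrivate) {}

MyThing::~MyThing() {
  delete p;
}

void MyThing::doSomething() {
  p->payload = 42; // uses Boost without exposing it to users
}

With this layout a grep for boost over the installed headers stays empty, which is exactly what the suggested test script can verify.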

Read more
pitti

I was asked to pour some love over autopilot-gtk, a GTK module to provide introspection of widget states to Autopilot. For those who don’t know, Autopilot is a QA tool to write automatic testing of GUI applications, without the race conditions and limitations that previous tools had with using only the ATK level. Please see the documentation and tutorial for more information. There are a lot of community members who do great things with it already, such as automating testing for Ubiquity or writing tests for GNOME applications like evince, gedit, nautilus, or Shotwell. This should now hopefully become easier.

Now autopilot-gtk has a proper test suite; I triaged all bug reports, wrote reproducers for them, and fixed them all in today’s upload to Saucy. In particular, you can now do the following:

  • Access to the GtkBuilder names: Instead of having to find a particular widget in terms of class, position, label contents, or other (sometimes) non-unique or unstable properties, you can now pick it by its unique and stable GtkBuilder name, which is the ID that most upstream code uses to manipulate widgets: b = self.app.select_single(BuilderName='entry_searchquery')
  • GtkTextBuffer type GObject properties are now translated into plain strings, which allows you to access the textual contents of a GtkTextView widget with my_textview.buffer (both for simple property access as well as for selecting by buffer contents).
  • GEnum and GFlags properties are now accessible. Enums are translated to strings (self.app.select_many('GtkButton', relief='GTK_RELIEF_HALF') or self.assertEqual(btn_greet.resize_mode, 'GTK_RESIZE_PARENT')), and flags are represented as a simple integer (like my_widget.events); in theory we could represent them as strings like FLAG_FOO | FLAG_BAR, but this becomes too unwieldy; for reliable identity matching one would always need to take care to sort them alphabetically, keep consistent spacing, etc.
  • Please let me know if you need access to other types of properties, it is now quite easy to support more (as long as there is a reasonable way of mapping them to a standard D-BUS data type). So please report bugs.

Read more
pitti

I released umockdev 0.2.6. Most importantly, this now fully works on ARM platforms, as we want to use it to write tests for/on the Ubuntu phone. I tested it on my Nexus 7, and the tests also succeed on the ARM Ubuntu builders (which are Panda boards). Fixing this revealed some interesting issues in recorded ioctl traces (as they are platform specific in some cases, due to different word lengths) as well as kernel bugs in the Tegra drivers.

This version also fixes compatibility with older automake versions again, so that the daily builds for raring should work again.

I also have a new gvfs test case ready to commit which uses umockdev (if available) to test the functionality of the gphoto backend. But that needs the new UMockdevTestbed.clear() API in 0.2.6, so I was holding it back. I will land it in upstream git soon.

Read more
Jussi Pakkanen

The decision by Go not to provide exceptions has given rise to a renaissance of sorts: eliminating exceptions and going back to error codes. There are various reasons given, such as efficiency, simplicity and the fact that exceptions “suck”.

Let’s examine what exceptions really are through a simple example. Say we need to write code to download some XML, parse and validate it and then extract some piece of information. There are several different ways in which this can fail: the network may be down, the server won’t respond, the XML is malformed and so on. Suppose then that we encounter an error. The call stack probably looks like this:

Func1 → Func2 → Func3 → Func4 → Func5 → Func6 → Func7

Func1 is the function that drives this functionality and Func7 is where the problem happens. In this particular case we don’t care about partial results. If we can’t do all the steps, we just give up. The error propagation starts with Func7 returning an error code to Func6. Func6 detects this and returns an error to Func5. This keeps happening until Func1 gets the error and reports failure to its caller.

Should Func7 throw an exception, functions 6-2 would not need to do anything. The compiler takes care of everything, Func1 catches the exception and reports the error.

This very simple example tells us what exceptions really are: a reliable way of moving up the call stack multiple frames at a time.

It also tells us what their main feature is: they provide a way to centralise error handling in one place.

It should be noted that exceptions do not force centralised error handling. Any function between 1 and 7 can catch any exception if that is deemed the best thing to do. The developer only needs to write code in those locations. In contrast, error codes require extra code at every single intermediate step. This might not seem like much in this particular case; after all, there are only six functions to change. Unfortunately in reality things look more like a tree: functions usually call several other functions to get their job done. This means that if the average call stack depth is N and every function makes a couple of calls, the developer needs to write O(2^N) error handling stubs. They also need to be tested, which means writing tons of mock classes. If any single one of these checks is wrong or missing, the system has a latent bug.

Even worse, most error code handlers look roughly like this:

ec = do_something();
if(ec) {
  do_some_cleanup();
  return ec;
}

What this code actually does is replicate the behaviour of exceptions. The only difference is that the developer needs to write this anew every single time, which opens the door for bugs.
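
For comparison, here is a minimal sketch of the same propagation done with exceptions (the function names are illustrative):

#include<stdexcept>
#include<cstdio>

void func7() {
  // Something goes wrong deep in the call stack.
  throw std::runtime_error("XML is malformed");
}

// The intermediate frames contain no error handling code at all.
void func6() { func7(); }
void func5() { func6(); }

int main() {
  try {
    func5();
  } catch(const std::exception &e) {
    // All error handling is centralised in this one place.
    fprintf(stderr, "Operation failed: %s\n", e.what());
    return 1;
  }
  return 0;
}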

Design lesson to be learned

Usually when you design an API, there are two choices: either it can be very simple or feature rich. The latter usually takes more time for the API developer to get right but saves effort for its users. In the case of exceptions, it requires work in the compiler, linker and runtime. Depending on circumstances, either one of these may be a valid choice.

When choosing between these two it is often beneficial to step back and look at it from a wider perspective. If the simpler choice was taken, what would happen? If it seems that in most cases (say >80%) people would only use the simple approach to mimic the behaviour of the feature rich one, it is a pretty strong hint that you should provide the feature rich one (or maybe even both).

This problem can go the other way, too: the framework may provide only a very feature rich and complex API, which people then use to simulate the simpler approach. The price of good design is eternal vigilance.

Read more
Jussi Pakkanen

If you read discussions on the Internet about memory allocation (and who doesn’t, really), one surprising tidbit that always comes up is that in Linux, malloc never returns null because the kernel does a thing called memory overcommit. This is easy to verify with a simple test application.

#include<stdio.h>
#include<malloc.h>

int main(int argc, char **argv) {
  while(1) {
    char *x = malloc(1);
    if(!x) {
      printf("Malloc returned null.\n");
      return 0;
    }
    *x = 0;
  }
  return 1;
}

This app tries to malloc memory one byte at a time and writes to it. It keeps doing this until either malloc returns null or the process is killed by the OOM killer. When run, the latter happens. Thus we have now proved conclusively that malloc never returns null.

Or have we?

Let’s change the code a bit.

#include<stdio.h>
#include<malloc.h>

int main(int argc, char **argv) {
  long size=1;
  while(1) {
    char *x = malloc(size*1024);
    if(!x) {
      printf("Malloc returned null.\n");
      printf("Tried to alloc: %ldk.\n", size);
      return 0;
    }
    *x = 0;
    free(x);
    size++;
  }
  return 1;
}

In this application we try to allocate a block of ever increasing size. If the allocation is successful, we release the block before trying to allocate a bigger one. This program does receive a null pointer from malloc.

When run on a machine with 16 GB of memory, the program will fail once the allocation grows to roughly 14 GB. I don’t know the exact reason for this, but it may be that the kernel reserves some part of the address space for itself and trying to allocate a chunk bigger than all remaining memory fails.

Summarizing: malloc under Linux can either return null or not and the non-null pointer you get back is either valid or invalid and there is no way to tell which one it is.

Happy coding.

Read more