Introducing libcolumbus, a fast online approximate matching library

Every day computer users perform variations of a simple task: selecting an element from a list of choices. Examples include installing packages with ‘apt-get install packagename’, launching an application from the Dash by typing its name, selecting your country from a list on a web page and so on.

What all these use cases have in common is intolerance for errors. If your query contains even a single typo, the correct choice will not be found. The only way around this is to erase the query and type it again from scratch. This is something people have learned to do without thinking.

This is not very good usability, though. If the user searches for, say, Firefox but accidentally types “friefox”, surely the computer should be able to work out what was meant and offer that as an alternative.

The first user-facing program in Ubuntu to offer this kind of error tolerance was the HUD. It used the Levenshtein distance as a way of determining user intent. In computer science terminology this is called approximate string matching or, informally, fuzzy matching.

Once the HUD was deployed, the need for this kind of error correction everywhere became apparent. Thus we set out to create a library that makes error tolerant matching easy to embed. This library is called libcolumbus.

Technical info

Libcolumbus has been designed with the following goals in mind:

  • it must be small
  • it must be fast
  • it must be easy to embed
  • it is optimized for online typing

The last of these means that you can do queries at any time, even if the user is still typing.

At the core of libcolumbus is the Levenshtein distance algorithm. It is a well-known and established way of doing fuzzy matching. Implementations are used in lots of different places, ranging from heavy-duty document retrieval engines such as Lucene and Xapian all the way down to Bash command completion. There is even a library that does fuzzy regexp matching.
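To give a feel for the algorithm, here is a plain textbook implementation of the Levenshtein distance computation. It is only an illustration of the concept, not code taken from libcolumbus.

    // Textbook Levenshtein distance: the minimum number of single-character
    // insertions, deletions and substitutions needed to turn one word into
    // another. This is an illustration of the concept, not libcolumbus code.
    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <vector>

    size_t levenshtein(const std::string &a, const std::string &b) {
        std::vector<size_t> prev(b.size() + 1), cur(b.size() + 1);
        for (size_t j = 0; j <= b.size(); ++j)
            prev[j] = j;                            // turning "" into b[0..j) needs j inserts
        for (size_t i = 1; i <= a.size(); ++i) {
            cur[0] = i;                             // turning a[0..i) into "" needs i deletes
            for (size_t j = 1; j <= b.size(); ++j) {
                size_t subst = prev[j - 1] + (a[i - 1] == b[j - 1] ? 0 : 1);
                cur[j] = std::min({prev[j] + 1,     // delete a[i-1]
                                   cur[j - 1] + 1,  // insert b[j-1]
                                   subst});         // substitute (or match for free)
            }
            std::swap(prev, cur);
        }
        return prev[b.size()];
    }

    int main() {
        // "friefox" and "firefox" differ by one swapped letter pair, which
        // plain Levenshtein counts as two substitutions.
        std::cout << levenshtein("friefox", "firefox") << "\n";  // prints 2
        return 0;
    }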

What sets Columbus apart from these are two things, both of which are well known and documented but less often used: a fast search implementation and custom errors.

The first feature is about performance. The fast Levenshtein implementation in libcolumbus is taken almost verbatim from a public domain implementation. The main speedup comes from using a trie to store the words instead of iterating over all items on every query. As a rough estimate, a brute force implementation can do 50-100 queries per second on a data set of 3000 words, whereas the trie version can do 600 queries per second on a data set of 50000 words.
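The trie idea can be sketched as follows: every node extends its parent’s dynamic programming row by one character, so common prefixes are computed only once, and a whole branch can be pruned as soon as the smallest value in its row exceeds the error budget. The code below is a simplified illustration of that technique, not the actual libcolumbus implementation.

    // Sketch of Levenshtein search over a trie: shared prefixes are evaluated
    // only once and whole branches are pruned when no match within max_cost is
    // possible. This is an illustration of the technique, not libcolumbus code.
    #include <algorithm>
    #include <iostream>
    #include <map>
    #include <memory>
    #include <string>
    #include <utility>
    #include <vector>

    struct TrieNode {
        std::map<char, std::unique_ptr<TrieNode>> children;
        bool is_word = false;
    };

    void insert(TrieNode &root, const std::string &word) {
        TrieNode *node = &root;
        for (char c : word) {
            auto &child = node->children[c];
            if (!child)
                child.reset(new TrieNode());
            node = child.get();
        }
        node->is_word = true;
    }

    // Compute the next dynamic programming row for one trie edge and recurse.
    void search_node(const TrieNode &node, char letter, const std::string &prefix,
                     const std::string &query, const std::vector<int> &prev_row,
                     int max_cost, std::vector<std::pair<std::string, int>> &results) {
        std::vector<int> row(query.size() + 1);
        row[0] = prev_row[0] + 1;  // the stored prefix is one character longer now
        for (size_t j = 1; j <= query.size(); ++j) {
            int subst = prev_row[j - 1] + (query[j - 1] == letter ? 0 : 1);
            row[j] = std::min({prev_row[j] + 1, row[j - 1] + 1, subst});
        }
        if (node.is_word && row[query.size()] <= max_cost)
            results.push_back({prefix, row[query.size()]});
        // Prune: if every cell already exceeds max_cost, no extension of this
        // prefix can get back under the limit, so the whole branch is skipped.
        if (*std::min_element(row.begin(), row.end()) <= max_cost)
            for (const auto &kv : node.children)
                search_node(*kv.second, kv.first, prefix + kv.first,
                            query, row, max_cost, results);
    }

    std::vector<std::pair<std::string, int>>
    search(const TrieNode &root, const std::string &query, int max_cost) {
        std::vector<int> first_row(query.size() + 1);
        for (size_t j = 0; j <= query.size(); ++j)
            first_row[j] = static_cast<int>(j);  // distance from the empty prefix
        std::vector<std::pair<std::string, int>> results;
        for (const auto &kv : root.children)
            search_node(*kv.second, kv.first, std::string(1, kv.first),
                        query, first_row, max_cost, results);
        return results;
    }

    int main() {
        TrieNode root;
        for (const std::string &w : {"firefox", "files", "fireworks", "terminal"})
            insert(root, w);
        for (const auto &hit : search(root, "friefox", 2))
            std::cout << hit.first << " (distance " << hit.second << ")\n";
        return 0;
    }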

The second feature is about quality of results. It is best illustrated with an example. Suppose there are two items to choose from, “abc” and “abp”. If the user types “abo”, which one of these should be chosen? In the classical Levenshtein sense both of the choices are identical: they are one replace operation away from the query string.

However, from a usability point of view “abp” is the correct answer, because the letter p is right next to the letter o and very far from the letter c. The user probably meant to hit the key o but missed it slightly. Libcolumbus allows you to set custom errors for these kinds of substitutions. If the standard substitution error is 100, one could set the error for substitutions between adjacent keys to a smaller value, say 20. This causes words with “simple typos” to be ranked higher automatically.
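The following sketch shows the general idea using the numbers from the example above. The function names and the (deliberately tiny) keyboard adjacency table are made up for illustration; this is not the libcolumbus API.

    // Weighted edit distance where the cost of a substitution depends on the
    // character pair. Keys adjacent on the keyboard get a small error (20)
    // instead of the standard error (100), so "abo" ends up closer to "abp"
    // than to "abc". Illustrative sketch, not the libcolumbus API.
    #include <algorithm>
    #include <iostream>
    #include <set>
    #include <string>
    #include <vector>

    const int STANDARD_ERROR = 100;     // insert, delete and ordinary substitution
    const int ADJACENT_KEY_ERROR = 20;  // substitution between neighbouring keys

    // A partial table of keyboard neighbours; a real table would cover every key.
    bool adjacent_keys(char a, char b) {
        static const std::set<std::string> pairs = {"op", "po", "io", "oi", "kl", "lk"};
        return pairs.count(std::string() + a + b) > 0;
    }

    int substitution_error(char a, char b) {
        if (a == b)
            return 0;
        return adjacent_keys(a, b) ? ADJACENT_KEY_ERROR : STANDARD_ERROR;
    }

    int weighted_distance(const std::string &a, const std::string &b) {
        std::vector<std::vector<int>> d(a.size() + 1, std::vector<int>(b.size() + 1));
        for (size_t i = 0; i <= a.size(); ++i) d[i][0] = static_cast<int>(i) * STANDARD_ERROR;
        for (size_t j = 0; j <= b.size(); ++j) d[0][j] = static_cast<int>(j) * STANDARD_ERROR;
        for (size_t i = 1; i <= a.size(); ++i)
            for (size_t j = 1; j <= b.size(); ++j)
                d[i][j] = std::min({d[i - 1][j] + STANDARD_ERROR,  // deletion
                                    d[i][j - 1] + STANDARD_ERROR,  // insertion
                                    d[i - 1][j - 1] + substitution_error(a[i - 1], b[j - 1])});
        return d[a.size()][b.size()];
    }

    int main() {
        std::cout << weighted_distance("abo", "abp") << "\n";  // 20: o and p are neighbours
        std::cout << weighted_distance("abo", "abc") << "\n";  // 100: ordinary substitution
        return 0;
    }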

There are several other uses for custom errors:

  • diacritical characters such as ê, é and è can be mapped to have very small errors to each other
  • fuzzy number pad typing can be enabled by assigning small errors for mapping a number to its corresponding letters (e.g. ‘3’ to ‘d’, ‘e’ and ‘f’) as well as to adjacent letters (i.e. those on number keys ‘2’ and ‘6’)
  • spam can be detected by assigning low errors for letters and numbers that look similar, such as ‘1’ -> ‘i’ and ‘4’ -> ‘a’, to match ‘v14gr4’ to ‘viagra’

Libcolumbus contains sample implementations of all of these except the last one. It also allows setting insert and delete errors at the beginning and end of the match. When these are set to low values, the algorithm performs a fuzzy substring search. The online matching discussed above is implemented with this: it allows the library to match the query term “fier” to “firefox” very fast.
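The substring behaviour can be illustrated with the same kind of dynamic programming table: if skipping characters of the candidate word before and after the matched region is free (or very cheap), the edit distance becomes a substring distance. Again, this is a sketch of the technique rather than the library’s code.

    // Sketch of fuzzy substring matching: characters of the candidate word
    // that lie before or after the matched region cost little or nothing, so
    // the query only has to line up with some part of the word.
    // Illustration of the technique, not libcolumbus code.
    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <vector>

    int substring_distance(const std::string &query, const std::string &word,
                           int end_gap_cost /* 0 = pure substring search */) {
        const int EDIT_COST = 1;
        std::vector<std::vector<int>> d(query.size() + 1,
                                        std::vector<int>(word.size() + 1));
        for (size_t j = 0; j <= word.size(); ++j)
            d[0][j] = static_cast<int>(j) * end_gap_cost;  // skip the word prefix cheaply
        for (size_t i = 1; i <= query.size(); ++i) {
            d[i][0] = static_cast<int>(i) * EDIT_COST;
            for (size_t j = 1; j <= word.size(); ++j)
                d[i][j] = std::min({d[i - 1][j] + EDIT_COST,  // drop a query character
                                    d[i][j - 1] + EDIT_COST,  // extra word character inside the match
                                    d[i - 1][j - 1] + (query[i - 1] == word[j - 1] ? 0 : EDIT_COST)});
        }
        // Skipping the tail of the word is also cheap: take the best cell in the last row.
        int best = d[query.size()][word.size()];
        for (size_t j = 0; j <= word.size(); ++j)
            best = std::min(best, d[query.size()][j] +
                                  static_cast<int>(word.size() - j) * end_gap_cost);
        return best;
    }

    int main() {
        std::cout << substring_distance("fier", "firefox", 0) << "\n";  // 1: matches inside the word
        std::cout << substring_distance("fier", "firefox", 1) << "\n";  // 4: full-word distance
        return 0;
    }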

Get the code

Our goal for the coming cycle is to enable error tolerant matching in as many locations as possible. Those developers who wish to try it on their application can get the source code here.

The library is implemented in C++0x. The recommended API to use is the C++ one. However, since many applications cannot link in C++ libraries, we also provide a plain C API. It is not as extensive as the C++ one, but we hope to provide full coverage there too.

The main thing to understand is the data model. Libcolumbus deals in terms of documents. A document consists of a (user provided) document ID and a named collection of texts. The ID field is guaranteed to be large enough to hold a pointer. Here’s an example of what a document could look like:

id: 42
  name: packagename
  description: This package does something.

Each line is a single-word field name followed by the text it contains. A document can contain an arbitrary number of fields. This is roughly analogous to what MongoDB uses. It should be noted that libcolumbus does not read any data files; the user needs to create document objects programmatically. The example above is just a visualisation.
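As a rough sketch of the data model, a document is essentially an ID plus a map from field names to texts. The type and member names below are made up for this illustration; they are not the actual libcolumbus classes.

    // Illustrative sketch of the document model described above. The type and
    // member names are made up for this example, not the libcolumbus API.
    #include <cstdint>
    #include <iostream>
    #include <map>
    #include <string>

    typedef uintptr_t DocumentID;  // large enough to hold a pointer, as described above

    struct Document {
        DocumentID id;
        // Field name -> text. A document can hold an arbitrary number of fields.
        std::map<std::string, std::string> fields;
    };

    int main() {
        Document d;
        d.id = 42;
        d.fields["name"] = "packagename";
        d.fields["description"] = "This package does something.";

        std::cout << "id: " << d.id << "\n";
        for (const auto &field : d.fields)
            std::cout << "  " << field.first << ": " << field.second << "\n";
        return 0;
    }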

Once the documents are created and passed to the main matcher object for processing, the system is ready to be queried. The result of a query is a list of document IDs and corresponding relevancies. Relevancy is just a number whose meaning is roughly “bigger means better”. The exact values are arbitrary and may change even between queries. End-user applications usually don’t need to bother with them.
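In practice an application typically just sorts the results by relevancy and shows the top hits. A minimal sketch, again with made-up type names rather than the real API:

    // Sketch of consuming query results: a list of (document ID, relevancy)
    // pairs, sorted so that the most relevant documents come first.
    // The types are illustrative, not the libcolumbus API.
    #include <algorithm>
    #include <cstdint>
    #include <iostream>
    #include <vector>

    typedef uintptr_t DocumentID;

    struct MatchResult {
        DocumentID id;
        double relevancy;  // arbitrary scale: bigger means a better match
    };

    int main() {
        std::vector<MatchResult> results = {{42, 31.5}, {7, 112.0}, {13, 4.25}};
        std::sort(results.begin(), results.end(),
                  [](const MatchResult &a, const MatchResult &b) {
                      return a.relevancy > b.relevancy;  // best match first
                  });
        for (const auto &r : results)
            std::cout << "document " << r.id << " (relevancy " << r.relevancy << ")\n";
        return 0;
    }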

There is one thing to be mindful of, though. The current implementation has a memory backend only. Its memory usage is moderate but has not yet been thoroughly optimized. If your data set is a few hundred unique words, you probably don’t have to care. A few thousand words take around 5 MB, which may be a problem on low-memory devices. Tens of thousands of words take tens of megabytes, which may be too much for many use cases. Both memory optimizations and a disk backend are planned, but for now you might want to stick to smallish data sets.