One touch

What makes computational geometry algorithms special is that they consist only of special cases.

Plane geometry algorithms are the ones that seem extremely obvious but are really hard to implement. Even simply checking whether a point lies within a given polygon turns out to be surprisingly hard to get right in every case.
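To see why, here is a minimal ray-casting sketch in Python (an illustration of mine, not code from any particular library). The happy path is a dozen lines; the special cases are everything else.

```python
# Minimal even-odd (ray-casting) point-in-polygon test -- a sketch.
# Even this "obvious" algorithm has tricky cases: points exactly on an
# edge, the ray passing through a vertex, horizontal edges, and
# floating-point noise.

def point_in_polygon(point, polygon):
    """Return True if point lies inside polygon (list of (x, y) vertices).

    Casts a ray to the right and counts edge crossings. Points exactly
    on the boundary may land on either side -- one of the special cases
    the post is talking about.
    """
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the horizontal line through the point?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that horizontal line.
            # Safe division: the branch above guarantees y1 != y2.
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Interior point of the unit square: clearly inside. A point on a vertex
# or edge, e.g. (0, 0), is exactly the kind of input that bites.
print(point_in_polygon((0.5, 0.5), [(0, 0), (1, 0), (1, 1), (0, 1)]))  # True
```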

These sorts of problems turn up all the time when dealing with multi-touch input. Let’s start with a simple example: dragging two fingers sideways. What should happen? A drag event, right? How hard can it be?

If both touches are within one window, the case is simple. But what if one touch is over a window and the other is over the desktop? Two one-finger drags would seem reasonable. But what if the desktop drag moves over the window? Should it transform into a two-finger drag? Two one-finger drags? Some hybrid of the two?
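To make the ambiguity concrete, here is a toy dispatcher sketch, with entirely hypothetical names rather than any real toolkit’s API. It groups touches into one gesture per window they started on; the open question is what touch_moved should do when the finger crosses onto another window.

```python
# Toy touch dispatcher -- hypothetical names, not a real toolkit's API.
# Policy: group touches into one gesture per window they started on.

class TouchDispatcher:
    def __init__(self):
        self.grouped_under = {}  # touch id -> window whose gesture owns it

    def touch_began(self, touch_id, window):
        # One choice of many: group by the window the touch started on.
        self.grouped_under[touch_id] = window

    def touch_moved(self, touch_id, window_now_under_finger):
        owner = self.grouped_under[touch_id]
        if window_now_under_finger is not owner:
            # The ambiguous case from the post: merge into the other
            # window's gesture, split into two one-finger drags, or keep
            # the original grouping? This sketch just keeps it.
            pass
        return owner  # deliver the drag event to the original gesture
```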

On second thought, that seems a bit complicated. Let’s look at something simpler: a one-finger drag. That’s simple, right? If you drag away from a window onto the desktop, the original window/widget keeps getting drag events as the touch moves. Drag and drop works and everything is peachy.

But what if the touch comes back into the same window, onto a different widget, one that also has a gesture subscription? Who gets the drag events? Suppose the user wanted to scroll one viewport up and another one down: that would mean ending the first gesture and starting a new one when moving back over the window. But if the user was doing drag and drop, we can’t end the gesture, because then we would lose the drag source information.
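Here is roughly why the two interpretations conflict, again as a hypothetical sketch rather than real toolkit code. Drag and drop wants an implicit grab, where the widget that owned the touch at the start keeps getting events; scrolling a second viewport wants re-hit-testing on every move. One dispatcher can’t pick both without knowing the user’s intent.

```python
# Two irreconcilable delivery policies -- hypothetical API, not a real
# toolkit.

class Widget:
    def __init__(self, name, x, y, w, h):
        self.name, self.x, self.y, self.w, self.h = name, x, y, w, h

    def contains(self, px, py):
        return (self.x <= px < self.x + self.w
                and self.y <= py < self.y + self.h)

    def handle_drag(self, touch_id, px, py):
        print(f"{self.name} gets drag event for touch {touch_id}")

def hit_test(widgets, px, py):
    # Topmost widget under the point (widgets assumed back-to-front).
    for w in reversed(widgets):
        if w.contains(px, py):
            return w
    return None

def deliver_motion(touch_id, px, py, widgets, grabs, regrab_on_enter):
    if regrab_on_enter:
        # Scroll-style policy: restart the gesture on whatever widget is
        # now under the finger -- but this forgets the drag source.
        grabs[touch_id] = hit_test(widgets, px, py)
    # Drag-and-drop-style policy: whoever holds the grab keeps it.
    target = grabs[touch_id]
    if target is not None:
        target.handle_drag(touch_id, px, py)

viewport_a = Widget("viewport A", 0, 0, 100, 100)
viewport_b = Widget("viewport B", 100, 0, 100, 100)
grabs = {1: viewport_a}  # touch 1 began on viewport A
deliver_motion(1, 150, 50, [viewport_a, viewport_b], grabs,
               regrab_on_enter=False)
# -> "viewport A gets drag event for touch 1" (the drag source survives)
```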

So there you go: the simplest possible gesture event pops up a problem that can’t be decided without knowing the user’s intent. And once you add the combinatorics of multiple touches, things really start getting hard.

2 thoughts on “One touch”

  1. This post is very interesting to me. I have very little experience with multi-touch development, but I can tell that these are some challenging problems you raise. I think part of the issue is that most applications, and even windowing systems, are written around mouse-pointer and keyboard interaction. The whole infrastructure is too rigid to accommodate the dynamics of multi-touch interaction.
    Are you familiar with Kivy? I have used it for a few multi-touch experimental projects so far, and its model for handling multi-touch input just makes a lot of sense. Of course, it is written from the ground up for new types of user input such as multi-touch. I wish Ubuntu had a project that aimed at developing an interactive desktop environment from scratch, perhaps by using a library such as Kivy.

    Regarding finding whether a point resides inside or outside a polygon, I recently had to implement that in an application. Luckily, the main library I was using (OpenCV) provides a function for this; search for cv::pointPolygonTest in its documentation (a short usage sketch follows this comment). Since it is written for image processing, it is very fast.

    Cheers!
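For reference, a hedged usage sketch of the function the comment mentions, via the OpenCV Python bindings (cv2.pointPolygonTest); the square and test points here are made up for illustration.

```python
import numpy as np
import cv2

# A 10x10 square as an OpenCV contour (shape (N, 1, 2), dtype int32).
square = np.array([[0, 0], [10, 0], [10, 10], [0, 10]],
                  dtype=np.int32).reshape(-1, 1, 2)

# measureDist=False: returns +1.0 inside, -1.0 outside, 0.0 on the edge.
print(cv2.pointPolygonTest(square, (5, 5), False))   # 1.0
print(cv2.pointPolygonTest(square, (15, 5), False))  # -1.0

# measureDist=True: returns the signed distance to the nearest edge.
print(cv2.pointPolygonTest(square, (5, 5), True))    # 5.0
```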
