What makes computational geometry algorithms special is that they consist almost entirely of special cases.
Plane geometry algorithms are those that seem extremely obvious but are really hard to implement correctly. Even simply checking whether a point lies within a given polygon turns out to be surprisingly hard to handle for every case.
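To see why, here is a minimal ray-casting sketch of point-in-polygon testing (the function name and data layout are illustrative, not from any particular library). It handles the common case in a few lines, but the degenerate cases, a point exactly on an edge or vertex, or a ray passing through a vertex, are exactly the special cases that make a robust implementation hard.

```python
def point_in_polygon(point, polygon):
    """Return True if point lies inside polygon (ray-casting rule).

    polygon is a list of (x, y) vertices. Degenerate cases (point
    exactly on an edge or vertex) are NOT handled here -- those are
    the special cases a robust implementation must deal with.
    """
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does the horizontal ray from (x, y) cross this edge?
        if (y1 > y) != (y2 > y):
            cross_x = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < cross_x:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon((2, 2), square))  # True
print(point_in_polygon((5, 2), square))  # False
```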
These sorts of problems turn up all the time when dealing with multi-touch input. Let’s start with a simple example: dragging two fingers sideways. What should happen? A drag event, right? How hard can it be?
If both touches are within one window, the case is simple. But what if one touch is over a window and the other is over the desktop? Two one-finger drags would seem reasonable. But what if the desktop drag then moves over the window? Should it transform into a two-finger drag? Two one-finger drags? Some hybrid of them?
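The ambiguity is easy to reproduce in code. The following sketch (the `Touch` and `Window` types and the grouping function are assumptions for illustration, not a real windowing API) groups touches by the surface they are over. The moment a touch migrates from one group to another is exactly the point where the question above has no obvious answer.

```python
from dataclasses import dataclass

@dataclass
class Touch:
    id: int
    x: float
    y: float

@dataclass
class Window:
    name: str
    x: float
    y: float
    w: float
    h: float

    def contains(self, t: Touch) -> bool:
        return (self.x <= t.x < self.x + self.w
                and self.y <= t.y < self.y + self.h)

def group_touches(touches, windows):
    """Map each touch to the window it is over, or 'desktop'."""
    groups = {}
    for t in touches:
        target = next((w.name for w in windows if w.contains(t)),
                      "desktop")
        groups.setdefault(target, []).append(t.id)
    return groups

win = Window("editor", 0, 0, 100, 100)
touches = [Touch(1, 50, 50), Touch(2, 150, 50)]
print(group_touches(touches, [win]))
# {'editor': [1], 'desktop': [2]} -- two one-finger drags so far.
# If touch 2 later moves to (60, 50), it joins the 'editor' group.
# Whether that should retroactively become a two-finger drag is
# precisely the question the text leaves open.
```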
On second thought, that seems a bit complicated. Let’s look at something simpler: a one-finger drag. That’s simple, right? If you drag away from a window to the desktop, the original window/widget keeps getting drag events as the touch moves. Drag and drop works and everything is peachy.
But what if the touch comes back into the same window, over a different widget? One that also has a gesture subscription. Who gets the drag events now? Suppose the user wanted to scroll one viewport up and another one down. That would mean we need to end the first gesture and start a new one when moving back over the window. But if the user was doing drag and drop, we can’t end the gesture, because then we lose the source information.
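The drag-and-drop interpretation corresponds to what toolkits usually call implicit capture: the widget that starts a gesture keeps receiving its events no matter what the pointer is over later. A minimal sketch (the `DragRouter` and widget names are hypothetical, not any toolkit's actual API) shows why capture and the "scroll a second viewport" interpretation are mutually exclusive:

```python
class Widget:
    def __init__(self, name):
        self.name = name
        self.events = []

    def on_drag(self, pos):
        self.events.append(pos)

class DragRouter:
    """Routes drag events via implicit capture."""
    def __init__(self):
        self.capture = {}  # touch id -> capturing widget

    def touch_down(self, touch_id, widget):
        # The widget under the initial touch captures the gesture.
        self.capture[touch_id] = widget

    def touch_move(self, touch_id, pos):
        # Route to the capturing widget, NOT whatever is under
        # pos now -- so drag and drop keeps its source, but a
        # second viewport can never start its own scroll gesture.
        self.capture[touch_id].on_drag(pos)

    def touch_up(self, touch_id):
        del self.capture[touch_id]

a = Widget("viewport_a")
router = DragRouter()
router.touch_down(1, a)
router.touch_move(1, (10, 0))   # over viewport_a
router.touch_move(1, (200, 0))  # over the desktop: a still gets it
router.touch_up(1)
print(a.events)  # [(10, 0), (200, 0)]
```

Dropping capture and re-hit-testing on every move would give the scrolling behavior instead, at the cost of forgetting the drag source, which is the dilemma.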
So there you go: the simplest possible gesture event already poses a problem with no single right answer. Once you add the combinatorics of multiple touches, things start getting really hard.