The invention relates to two-dimensional (2D) touch sensors, in particular to the processing of data acquired from such sensors.
Various different technologies are used for 2D touch sensors, notably resistive and capacitive. Independently of which technology is used, 2D touch sensors generally have a construction based on a matrix of sensor nodes that form a 2D array in Cartesian coordinates, i.e. a grid.
Two of the most active development areas in capacitive touch sensing are the related topics of multi-touch processing and gesture recognition, or “multi-touch” and “gestures” for short.
Multi-touch refers to the ability of a 2D touch sensor to be able to sense more than one touch simultaneously. Basic touch sensors are designed to assume that only one touch is present on the sensor at any one time, and are designed to output only one x,y coordinate at any one time. A multi-touch sensor is designed to be able to detect multiple simultaneous touches. The simplest form of a multi-touch sensor is a two-touch sensor which is designed to be able to detect up to two simultaneous touches.
It will be appreciated that two-touch detection is essential for even basic keyboard emulation, since a SHIFT key is required to operate a conventional keyboard. Further, many gestures require two-touch detection. Although the term “gesture” is perhaps not well defined in the industry, it is generally used to refer to user input that is more sophisticated than a single “tap” or “press” at a particular location. A very simple gesture might be a “double tap” when the user quickly touches and releases the touch surface twice in quick succession. However, usually, when gestures are referred to it is in connection with touch motions. Example single touch motion gestures are “flick” and “drag”, and example two-touch motions are “pinch”, “stretch” and “rotate”.
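The "double tap" mentioned above can be classified from the timing of press and release events alone. The following is a minimal illustrative sketch; the event encoding and the timing thresholds are assumptions chosen for illustration, not values prescribed by any particular sensor.

```python
# Illustrative thresholds (assumed values, not from any specific device)
MAX_TAP_DURATION = 0.15   # max press-to-release time for a single "tap", seconds
DOUBLE_TAP_WINDOW = 0.3   # max gap between the two taps, seconds

def is_double_tap(events):
    """events: list of (kind, time) tuples, kind is 'press' or 'release'.

    A double tap is press/release, press/release, where both taps are
    short and the gap between them is short.
    """
    if len(events) != 4:
        return False
    kinds = [kind for kind, _ in events]
    if kinds != ['press', 'release', 'press', 'release']:
        return False
    t = [time for _, time in events]
    tap1 = t[1] - t[0]   # duration of first tap
    tap2 = t[3] - t[2]   # duration of second tap
    gap = t[2] - t[1]    # time between release and second press
    return (tap1 <= MAX_TAP_DURATION and tap2 <= MAX_TAP_DURATION
            and gap <= DOUBLE_TAP_WINDOW)
```

For example, `is_double_tap([('press', 0.0), ('release', 0.1), ('press', 0.2), ('release', 0.3)])` returns True, whereas two slow presses would not qualify.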
Clearly, a multi-touch sensing capability is a pre-requisite for being able to provide for any gestures based on two or more simultaneous objects.
Existing approaches to gesture recognition are based on supplying a gesture recognition algorithm with a time series of touch coordinates from the touch sensor, the data set from each time interval being referred to as a frame, or frame data, in the following. For example, the outcome of processing four sequential time frames t1, t2, t3 and t4 to track motion of up to three touches on a touch sensor could be the following data:
TABLE 1
                Touches/objects adjacent touch panel
Time frame          1              2              3
t1               (4, 7)         (4, 3)         (10, 7)
t2               (3, 7)         (2, 4)         (6, 6)
t3               (1, 8)         (—, —)         (5, 7)
t4               (3, 8)         (—, —)         (6, 7)
In the table, example (x, y) coordinates of three tracked touches are shown, where the second touch ceased at time t3. Of course the time series would continue over a very large number of time increments.
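One possible encoding of such frame data is sketched below, with one entry per time frame holding the coordinates of up to three tracked touches, and None marking a touch that is no longer present (shown as (—, —) in the table). The data structure is an illustrative assumption, not a format defined by the invention.

```python
# Frame data corresponding to Table 1: each frame lists up to three
# tracked touches; None marks a touch that has ceased.
frames = {
    't1': [(4, 7), (4, 3), (10, 7)],
    't2': [(3, 7), (2, 4), (6, 6)],
    't3': [(1, 8), None,   (5, 7)],
    't4': [(3, 8), None,   (6, 7)],
}

def active_touches(frame):
    """Count how many tracked touches are present in a frame."""
    return sum(1 for touch in frame if touch is not None)
```

For instance, `active_touches(frames['t3'])` gives 2, reflecting that the second touch ceased at time t3.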
The task of the gesture recognition algorithm is to analyze the frame data: first to identify which gestures are being input by the user; and second to parameterize the recognized gestures and output these parameters to a higher level of the processing software. For example, a gesture recognition algorithm may report that the user has input a "rotate" gesture which expresses a rotation through a 67-degree angle.
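The rotation parameter in such a report could, for instance, be derived from the change in orientation of the line joining the two touches between the first and last frames of the gesture. The following is a hedged sketch of one such parameterization; the function name and interface are illustrative assumptions.

```python
import math

def rotation_angle(first_frame, last_frame):
    """Each frame is ((x1, y1), (x2, y2)), the two touch coordinates.

    Returns the rotation in degrees of the line joining the two
    touches, normalized to the range (-180, 180].
    """
    def orientation(frame):
        # Angle of the line from touch 1 to touch 2, in radians
        (x1, y1), (x2, y2) = frame
        return math.atan2(y2 - y1, x2 - x1)

    degrees = math.degrees(orientation(last_frame) - orientation(first_frame))
    # Wrap into (-180, 180] so e.g. a 350-degree delta reads as -10
    while degrees <= -180:
        degrees += 360
    while degrees > 180:
        degrees -= 360
    return degrees
```

For example, if the second touch moves from directly right of the first touch to directly above it, `rotation_angle(((0, 0), (1, 0)), ((0, 0), (0, 1)))` reports a 90-degree rotation.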