Touch sensing panels are in widespread use in a variety of applications. Typically, a touch sensing panel is actuated by a touch object such as a finger or stylus, either in direct contact or through proximity (i.e. without contact). Touch sensing panels are for example used as touch pads of laptop computers, in control panels, and as overlays to displays on e.g. handheld devices, such as mobile telephones. A touch panel that is overlaid on or integrated in a display is also denoted a “touch screen”. Many other applications are known in the art.
As used herein, all different types of touch sensitive apparatuses, including touch sensing panels, touch pads, touch screens, etc., are collectively referred to as “touch systems”.
To an increasing extent, touch systems are designed to be able to detect two or more touches simultaneously, this capability often being referred to as “multi-touch” in the art.
There are numerous known techniques for providing multi-touch sensitivity. In US2010/141604 resistive wire grids are incorporated into a panel to form a tactile sensor, and in WO2009/007704 an array of capacitive sensors is used for the same purpose. WO03/041006 discloses a multi-point touch pad formed by strain gauges incorporated into a panel. There are also multi-touch systems that are based on optical detection, and many of these systems may be scaled to large sizes without adding significant cost. For example, U.S. Pat. No. 7,465,914 discloses a panel which is internally illuminated by light that propagates by total internal reflection (TIR), where the light that is scattered by a touch object is detected by means of one or more angle-sensitive light detectors arranged at the periphery of the panel. US2008/0284925 discloses the use of a similarly illuminated panel, where the touch objects are detected by imaging the light that escapes from the panel at the location of the touch objects. WO2010/006882 and WO2010/064983 disclose multi-touch systems that operate by propagating light through a panel by TIR, and by identifying the locations of touch objects based on the attenuation caused by the touches in the transmitted light. WO2006/095320 discloses a touch system in which light is propagated above the touch surface, whereby the touch objects locally block the propagating light. WO2010/056177 discloses a multi-touch system including a light guide placed over a display integrated with light sensors, whereby the light sensors are operated to locally detect light that is scattered from the light guide by objects touching the light guide. Other proposed techniques for multi-touch sensitivity use ultrasound or surface acoustic waves, see e.g. US2010/0026667.
As the availability of multi-touch systems increases, and in particular as these systems are made available in a wide range of sizes and enable an increased number of simultaneous touches, it can be foreseen that software applications with advanced user interaction will be developed to be run on devices with these types of touch systems. For example, a user may be allowed to enter advanced multi-touch gestures or control commands, in which fingers on one or both hands are dragged across a touch surface, and it may be possible for several users to work concurrently on the touch surface, either in different application windows, or in a collaborative application window.
Irrespective of sensor technology, the touches need to be detected against a background of measurement noise and other interferences, e.g. originating from ambient light, fingerprints and other types of smear on the touch surface, vibrations, etc. The influence of measurement noise and interferences may vary not only over time but also within the touch surface, making it difficult to properly detect the touches on the touch surface at all times.
The combination of several touches and complex gestures, as well as temporal and spatial variations in background and noise, will make the identification of touches a more demanding task. The user experience will be greatly hampered if, e.g., an ongoing gesture on a touch screen is interrupted by the system failing to detect certain touches during the gesture.
The task of providing a uniform and consistent user experience may be even more demanding if the touch system has a limitation on the maximum number of simultaneous touches that may be detected with accuracy. Such a system may not be able to detect weaker touches when there are touches generating stronger measurement signals. One difficult situation may arise when more than one person operates on the touch surface. For example, in a touch system configured to detect three touches, if a second person puts down three fingers on the touch surface while a first person makes a drag with one finger, it is not unlikely that the system will ignore the remainder of the drag since a drag typically generates weaker signal levels from the touch sensors than non-moving touches.
As another example of a difficult situation, consider a touch system used for a casino application, which enables a number of players and a croupier to interact with the touch surface. If there is a limitation of the maximum number of touches, the touch system may not be able to correctly and consistently detect the interaction from the croupier unless the players are instructed to lift their hands from the touch surface.
The prior art also comprises US2010/0073318 which discloses a technique for detecting and tracking multiple touch points on a touch surface using capacitance readings from two independent arrays of orthogonal linear capacitive sensors. A touch point classifier in combination with a Hidden Markov Model (HMM) is operated on the capacitance readings to determine the number (N) of touch points on the touch surface. Then, a peak detector processes the capacitance readings to identify the N largest local maxima, which are processed by a Kalman tracker to output the location of N touch points. The Kalman tracker matches touch points determined in a current time frame with predicted locations of touch points determined in preceding time frames. For each predicted touch point, the nearest touch point in the current time frame is found in terms of Euclidean distance. If the distance exceeds a threshold, the predicted touch point is output, otherwise the nearest touch point is output.
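The matching step described above may be sketched as follows. This is a minimal illustration of nearest-neighbor matching by Euclidean distance against a threshold, with hypothetical function and variable names; it is not the implementation disclosed in US2010/0073318, which further involves the HMM-based count estimation and Kalman prediction steps omitted here.

```python
import math

def match_touch_points(predicted, current, threshold):
    """For each predicted touch point (x, y), find the nearest touch
    point detected in the current frame (Euclidean distance).  If that
    distance exceeds `threshold`, output the predicted point itself;
    otherwise output the nearest detected point.

    `predicted` and `current` are lists of (x, y) tuples; `current`
    is assumed non-empty for this sketch.
    """
    output = []
    for px, py in predicted:
        # Nearest current-frame point to this predicted point.
        nearest = min(current, key=lambda c: math.hypot(c[0] - px, c[1] - py))
        dist = math.hypot(nearest[0] - px, nearest[1] - py)
        if dist > threshold:
            output.append((px, py))   # no match close enough: keep prediction
        else:
            output.append(nearest)    # match found: output detected point
    return output
```

For example, with a prediction at (0, 0) and a detection at (1, 0), the detection is output when the threshold is 5, whereas a prediction with no detection within the threshold is passed through unchanged.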