Any discussion of the prior art throughout the specification should in no way be considered as an admission that such prior art is widely known or forms part of the common general knowledge in the field.
Input devices based on touch sensing (referred to herein as touch screens irrespective of whether the input area coincides with a display screen, either in whole or in part) have long been used in electronic devices such as computers, personal digital assistants (PDAs), handheld games and point of sale kiosks, and are now appearing in other portable consumer electronics devices such as mobile phones. Generally, touch-enabled devices allow a user to interact with the device by touching one or more graphical elements, such as icons or keys of a virtual keyboard, presented on a display.
Several touch-sensing technologies are known, including resistive, surface capacitive, projected capacitive, surface acoustic wave, optical and infrared, all of which have advantages and disadvantages in areas such as cost, reliability, ease of viewing in bright light, ability to sense different types of touch object, e.g. finger, gloved finger, stylus, and single or multi-touch capability.
The various touch-sensing technologies known differ widely in their multi-touch capability, i.e. their performance when faced with two or more simultaneous touch events. Some early touch-sensing technologies such as resistive and surface capacitive are completely unsuited to detecting multiple touch events, reporting two simultaneous touch events as a ‘phantom touch’ halfway between the two actual points. Certain other touch-sensing technologies have good multi-touch capability but are disadvantageous in other respects. For example, projected capacitive touch screens, discussed in US Patent Application Publication No 2006/0097991 A1, only sense certain touch objects (e.g. gloved fingers and non-conductive styluses are unsuitable) and use high refractive index transparent conductive films that are well known to reduce display viewability, particularly in bright sunlight. In another example, video camera-based systems, discussed in US Patent Application Publication Nos 2006/0284874 A1 and 2008/0029691 A1, are extremely bulky and unsuitable for hand-held devices. Another touch technology with good multi-touch capability is ‘in-cell’ touch, where an array of sensors is integrated with the display pixels of a display (such as an LCD or OLED display). These sensors are usually photo-detectors (disclosed in U.S. Pat. No. 7,166,966 and US Patent Application Publication No 2006/0033016 A1 for example), but variations involving micro-switches (US 2006/0001651 A1) and variable capacitors (US 2008/0055267 A1), among others, are also known. In-cell approaches cannot be retro-fitted and generally add complexity to the manufacture and control of the displays in which the sensors are integrated. Furthermore, those that rely on ambient light shadowing cannot function in low light conditions.
In yet another approach to touch sensing with several possible configurations, commonly known as ‘infrared’ touch, a touch event is detected and located by the shadowing of two intersecting light paths. In one well-known configuration, illustrated in FIG. 1 and described in U.S. Pat. Nos. 3,478,220 and 3,764,813, a touch screen 2 includes arrays of discrete light sources 12 (e.g. LEDs) along two adjacent sides of a rectangular input area 4 emitting two sets of parallel beams of light 16 towards opposing arrays of photo-detectors 14 along the other two sides of the input area. If a touch object 10 in the input area blocks a substantial portion of at least one beam in each of the two axes, its location can be readily determined.
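The single-touch location principle described above can be illustrated with a minimal sketch (the function name, beam indexing and pitch value are hypothetical, not taken from the cited patents): each axis reports which of its parallel beams are blocked, and the touch position is taken as the centre of the blocked run on each axis.

```python
# Locate a single touch object on a beam-grid infrared touch screen.
# Each axis reports the indices of its blocked beams; the touch position
# is estimated as the centre of the blocked run on each axis.

def locate_single_touch(blocked_x, blocked_y, pitch=5.0):
    """blocked_x, blocked_y: sorted indices of blocked beams on each axis.
    pitch: beam spacing in millimetres (illustrative value only).
    Returns (x, y), the estimated touch centre, or None if no touch."""
    if not blocked_x or not blocked_y:
        return None  # a touch must block at least one beam in each axis
    x = pitch * (blocked_x[0] + blocked_x[-1]) / 2
    y = pitch * (blocked_y[0] + blocked_y[-1]) / 2
    return (x, y)

# A finger covering beams 3-4 on the x axis and beam 7 on the y axis:
print(locate_single_touch([3, 4], [7]))  # (17.5, 35.0)
```

The same centre-of-shadow estimate applies whether the sensing field is a set of discrete beams or a uniform light sheet sampled by receive waveguides.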
In a variant infrared touch screen 17 that greatly reduces the optoelectronic component count, illustrated in FIG. 2 and described in U.S. Pat. No. 5,914,709, the arrays of light sources are replaced by arrays of ‘transmit’ optical waveguides 18 integrated on an L-shaped substrate 20 that distribute light from a single light source 12 (e.g. an LED or a vertical cavity surface emitting laser (VCSEL)) via a 1×N splitter 21 to produce a grid of light beams 16, and the arrays of photo-detectors are replaced by arrays of ‘receive’ optical waveguides 22 integrated on another L-shaped substrate 23 that collect the light beams and conduct them to a detector array 24 (e.g. a line camera or a digital camera chip). Each optical waveguide includes an in-plane lens 26 that collimates or focuses the signal light in the plane of the input area 4, and the device may also include cylindrically curved vertical collimating lenses (VCLs) 28 to collimate the signal light in the out-of-plane direction. As in the touch screen 2 of FIG. 1, a touch object is located from the beams blocked in each axis. For simplicity, FIG. 2 only shows four waveguides per side of the input area 4; in actual touch screens the in-plane lenses will be sufficiently closely spaced such that the smallest likely touch object will block a substantial portion of at least one beam in each axis. This type of infrared touch screen will be referred to hereinafter as an ‘all-waveguide’ touch screen.
In yet another variant infrared touch screen 30 shown in FIG. 3A (plan view) and disclosed in US Patent Application Publication No 2008/0278460 A1, entitled ‘A transmissive body’ and incorporated herein by reference, the ‘transmit’ waveguides 18 and their in-plane lenses 26 of the all-waveguide device shown in FIG. 2 are replaced by a transmissive body 32 comprising a planar transmissive element 34 and two collimation/redirection elements 36 that include parabolic turning mirrors 38. Infrared light 40 from a pair of optical sources 12 (e.g. LEDs or VCSELs) is launched into the transmissive element, then collimated and re-directed by the collimation/redirection elements to produce two sheets of light 42 that propagate in front of the transmissive element towards the receive waveguides 22. The light path through the transmissive element 34 and a collimation/redirection element 36 is shown in side view in FIG. 3B. Portions of the light sheets are collected by the in-plane lenses and guided to the detector array 24, and a touch object is detected and its location and dimensions determined from the obscured portions of the light sheets 42. Clearly the transmissive element 34 needs to be transparent to the infrared light 40 emitted by the optical sources 12, and it also needs to be transparent to visible light if there is an underlying display 43 (FIG. 3B). Alternatively, a display may be located between the transmissive element and the light sheets 42, in which case the transmissive element need not be transparent to visible light. This type of infrared touch screen will be referred to hereinafter as a ‘periscopic retro-reflector’ or ‘PRR’ touch screen.
A common feature of the infrared touch screens shown in FIGS. 1 to 3 is that the sensing light is provided in two fields containing parallel light paths, either as discrete beams (FIGS. 1 and 2) or as more or less uniform sheets of light (FIG. 3). The axes of the two light fields are usually perpendicular to each other and to the sides of the input area, although this is not essential (see for example U.S. Pat. No. 5,414,413). Despite the name ‘infrared touch screens’ it should be understood that the wavelength of the sensing light need not be in the infrared region, but could be in the visible for example.
Turning now to the issue of multi-touch capability, although infrared touch screens can detect the presence of multiple touch events, they are often unable to determine their locations unambiguously. In general, n simultaneous touch events will be detected as n² ‘candidate points’, of which n(n−1) will be ‘phantom points’. For the simplest multi-touch situation n=2 (‘double touch’), the response of a FIG. 1 infrared touch screen 2 is illustrated in FIG. 4. The ‘candidate points’ include the two actual touch points 10 and two ‘phantom points’ 44 at the corners of a rectangle, and it can be difficult, if not impossible, to identify the correct pair without further information. It will be appreciated that the variant infrared touch screens of FIGS. 2 and 3 will also respond in the manner shown in FIG. 4. In some circumstances the correct pair can be identified via some form of extra information; for example, as explained in U.S. Pat. No. 6,856,259, touch-down and lift-off timing, relative object sizes and expected touch locations can all be of use in resolving an ambiguity.
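The n² candidate / n(n−1) phantom count follows directly from the observation model, and can be sketched as follows (the helper is hypothetical, not part of any cited patent): the grid only observes the sets of blocked x and y positions, so every pairing of a blocked x with a blocked y is a candidate point, and for n touches at distinct coordinates only n of the n² pairings are real.

```python
from itertools import product

def candidate_points(touches):
    """touches: list of (x, y) actual touch points at distinct coordinates.
    An infrared grid only observes the sets of blocked x and y positions,
    so every (x, y) pairing of those sets is a candidate point."""
    xs = {x for x, _ in touches}
    ys = {y for _, y in touches}
    return {(x, y) for x, y in product(xs, ys)}

touches = [(1, 5), (4, 2)]  # a 'double touch', n = 2
cands = candidate_points(touches)
phantoms = cands - set(touches)
print(sorted(cands))     # n**2 = 4 candidates
print(sorted(phantoms))  # n*(n-1) = 2 phantoms at the other rectangle corners
```

Running this for the double touch of FIG. 4 yields the two actual points plus the two phantom points at the remaining corners of the rectangle, exactly as described above.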
However, even if the correct pair can be identified, say because one touch-down occurred before the other, further complications can arise if the detection system has to track moving touch objects. For example, if two moving touch objects (FIG. 5A) on an infrared touch screen 2 move into an ‘eclipse’ state (FIG. 5B), the ambiguity between the actual points 10 and the phantom points 44 recurs when the objects move out of the eclipse state. FIGS. 5C and 5D illustrate two possible motions out of the eclipse state of FIG. 5B that, without further information, are indistinguishable to the touch screen controller.
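The eclipse ambiguity can be seen with the same observation model (a hypothetical sketch, not taken from the cited patents): the two possible post-eclipse configurations produce identical sets of blocked coordinates, so the controller cannot distinguish them without extra information.

```python
def observed(touches):
    # An infrared grid only sees the blocked x and y coordinate sets,
    # not which x pairs with which y.
    return ({x for x, _ in touches}, {y for _, y in touches})

# Two objects leave an eclipse state either by passing through each other
# or by bouncing back; both motions shadow the grid in exactly the same way.
passing = [(3, 6), (6, 3)]
bouncing = [(3, 3), (6, 6)]
print(observed(passing) == observed(bouncing))  # True: indistinguishable
```

Both configurations block x positions {3, 6} and y positions {3, 6}, so the grid's view of the two motions is identical, which is the ambiguity illustrated in FIGS. 5C and 5D.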