Computing devices, such as notebook computers, personal data assistants (PDAs), kiosks, and mobile handsets, have user interface devices, which are also known as human interface devices (HIDs). One user interface device that has become more common is a touch-sensor pad (also commonly referred to as a touchpad). A basic notebook computer touch-sensor pad emulates the function of a personal computer (PC) mouse. A touch-sensor pad is typically embedded into a PC notebook for built-in portability. A touch-sensor pad replicates mouse X/Y movement using two defined axes, each containing a collection of sensor elements that detect the position of a conductive object, such as a finger. Mouse right/left button clicks can be replicated by two mechanical buttons located in the vicinity of the touchpad, or by tapping commands on the touch-sensor pad itself. The touch-sensor pad provides a user interface device for performing such functions as positioning a pointer or selecting an item on a display. These touch-sensor pads may include multi-dimensional sensor arrays for detecting movement in multiple axes. The sensor array may be one-dimensional, detecting movement in one axis, or two-dimensional, detecting movement in two axes.
One type of touchpad operates by way of capacitance sensing utilizing capacitance sensors. The capacitance detected by a capacitance sensor changes as a function of the proximity of a conductive object to the sensor. The conductive object can be, for example, a stylus or a user's finger. In a touch-sensor device, a change in capacitance detected by each sensor in the X and Y dimensions of the sensor array due to the proximity or movement of a conductive object can be measured by a variety of methods. Regardless of the method, an electrical signal representative of the capacitance detected by each capacitive sensor is usually processed by a processing device, which in turn produces electrical or optical signals representative of the position of the conductive object in relation to the touch-sensor pad in the X and Y dimensions. A touch-sensor strip, slider, or button operates on the same capacitance-sensing principle.
Another user interface device that has become more common is a touch screen. Touch screens, also known as touchscreens, touch panels, or touchscreen panels, are display overlays which are typically either pressure-sensitive (resistive), electrically-sensitive (capacitive), acoustically-sensitive (surface acoustic wave (SAW)), or photo-sensitive (infrared). Such an overlay allows a display to be used as an input device, removing the keyboard and/or the mouse as the primary input device for interacting with the display's content. Such displays can be attached to computers or, as terminals, to networks. There are a number of types of touch screen technologies, such as optical imaging, resistive, surface acoustic wave, capacitive, infrared, dispersive signal, piezoelectric, and strain gauge technologies. Touch screens have become familiar in retail settings, on point-of-sale systems, on ATMs, on mobile handsets, on kiosks, on game consoles, and on PDAs, where a stylus is sometimes used to manipulate the graphical user interface (GUI) and to enter data.
A first type of conventional touchpad is composed of a matrix of rows and columns. Within each row or column, there are multiple sensor elements. However, all sensor elements within each row or column are coupled together and operate as one long sensor element. The number of touches a touchpad can detect is not the same as the resolution of the touchpad. For example, even though a conventional touchpad may have the capability to detect two substantially simultaneous touches with an XY matrix, the conventional touchpad cannot resolve the locations of the two substantially simultaneous touches. The only conventional way to resolve the location of a second touch is if the touches arrive sequentially in time. This allows the remaining potential locations to be evaluated to determine which locations are “actual touch” locations and which are invalid touches, also referred to as “ghost touch” locations. If both touches arrive or are detected substantially simultaneously, there is no way to resolve which of the two pairs of potential locations constitutes “actual” touches, instead of invalid touches (e.g., “ghost” touches). Thus, conventional two-axis touchpads are configured to resolve only the location of a single touch. Similarly, conventional touch screens are designed to detect the presence and location of a single touch.
In its minimal form, multi-touch detection requires a two-layer implementation: one layer to support the rows and the other to support the columns. Additional axes, implemented on touch screens using additional layers, can allow resolution of additional simultaneous touches, but these additional layers come at a significant cost in terms of both materials and yield loss. Likewise, the added rows, columns, or diagonals used in multi-axial scanning may also require additional time to scan and more complex computation to resolve the touch locations.
Conventional two-layer XY matrix touchpad/touchscreen designs are typically arranged as two independent linear sliders, placed physically orthogonal to each other, and substantially filling a planar area. Using a centroid-processing algorithm to determine the peak in sensed capacitance, one slider is used to determine the X location of a finger touch and the second slider is used to determine the Y location of the touch. This is shown in FIG. 1A, where the single touch 101 represents the location of the operator's finger on the touchpad or touch screen.
FIG. 1A illustrates a detection profile 100 of a single touch 101 with the first type of conventional touchpad 110 noted above, as detected when scanning the rows and columns of an XY matrix. The location of the touch 101 on the Y-axis is determined from the calculated centroid of additional capacitance (e.g., 1st maximum 121) of the scanned rows in the matrix, and the location on the X-axis is determined from the calculated centroid of additional capacitance (e.g., 1st maximum 131) of the scanned columns of the same matrix. Conventional methods can be used to determine the location of a single finger anywhere on the touch screen.
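The centroid processing described above can be sketched in a few lines of Python. This is an illustration only, not the method of any particular controller; the noise threshold and the scan profiles below are hypothetical values chosen for the example.

```python
def centroid(counts, threshold=5):
    """Weighted-average (centroid) index of a single capacitance peak.

    counts: baseline-subtracted capacitance deltas, one per row or column.
    Values below the hypothetical noise threshold are ignored.
    """
    active = [(i, c) for i, c in enumerate(counts) if c >= threshold]
    total = sum(c for _, c in active)
    if total == 0:
        return None  # no touch detected on this axis
    return sum(i * c for i, c in active) / total

# Hypothetical scan results for a 10-row by 10-column matrix:
rows = [0, 1, 6, 12, 7, 1, 0, 0, 0, 0]
cols = [0, 0, 0, 1, 8, 14, 9, 2, 0, 0]
y = centroid(rows)  # peak near row 3
x = centroid(cols)  # peak near column 5
```

Because the centroid is a weighted average rather than a simple maximum, the reported location can fall between sensor elements, giving finer resolution than the element pitch.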
When a second finger is placed on the touch-sensitive area, this technique can still be applied; however, multiple restrictions apply. If the two fingers are on exactly the same axis (X or Y), the centroid algorithm can be modified to determine the locations of two peaks on the alternate axis and thus report correct X/Y co-ordinates of both fingers. FIG. 1B shows an example of two fingers at different points on the X-axis, but on the same Y-axis co-ordinate. The same concept applies if both fingers are on the same X-axis co-ordinate but in different locations on the Y-axis. In both cases, the locations of both fingers can be determined.
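One way such a modified centroid algorithm could work is to compute a centroid per contiguous above-threshold segment, yielding one peak per finger. This is a sketch under that assumption, with hypothetical threshold and profile values, not a specific controller's algorithm:

```python
def find_peaks(counts, threshold=5):
    """Centroid of each contiguous above-threshold segment of the profile.

    Each segment is assumed to correspond to one finger, so two fingers on
    the same Y co-ordinate yield two distinct peaks on the X axis.
    """
    peaks, segment = [], []
    for i, c in enumerate(counts):
        if c >= threshold:
            segment.append((i, c))
        elif segment:
            total = sum(c for _, c in segment)
            peaks.append(sum(i * c for i, c in segment) / total)
            segment = []
    if segment:  # flush a segment that runs to the end of the axis
        total = sum(c for _, c in segment)
        peaks.append(sum(i * c for i, c in segment) / total)
    return peaks

# Hypothetical column scan with two fingers on the same row:
cols = [0, 6, 12, 5, 0, 0, 7, 13, 6, 0]
x_peaks = find_peaks(cols)  # two distinct X locations
```

Note that this segment-based approach only works while the two peaks remain separated by below-threshold elements, which motivates the overlap problem discussed next.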
This sensing does have issues when the two touches share a common centroid peak but are not on the exact same horizontal or vertical axis. When this happens, the shared capacitance peak tends to be slightly wider than that of a single touch, but a single centroid is calculated at some mathematical mean location between the two touches. The reported position is then accurate for neither touch when the exact location of each needs to be known.
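A small numerical sketch (with hypothetical capacitance profiles) shows the effect: two overlapping touches merge into one wide peak, and the single computed centroid lands between the true locations.

```python
# Hypothetical capacitance profiles of two touches centered at
# indices 3 and 5; their fields overlap on the shared sensor axis.
touch_a = [0, 0, 5, 12, 5, 0, 0, 0, 0]
touch_b = [0, 0, 0, 0, 5, 12, 5, 0, 0]
merged = [a + b for a, b in zip(touch_a, touch_b)]  # one wide peak

# A single centroid over the merged peak:
total = sum(merged)
single_centroid = sum(i * c for i, c in enumerate(merged)) / total
# The reported location falls between the two real touches at
# indices 3 and 5, matching neither.
```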
Other problems arise when the second touch is at a different location on the X-axis and a different location on the Y-axis. FIG. 1C shows two different cases for the physical touch location of two fingers, but note that the centroid calculation produces the exact same result. Therefore, in this situation, the touch screen controller cannot determine exactly where the two fingers are located. The algorithm produces two possible results for two fingers, and multiple potential results for three fingers and even a possible four-finger combination. For example, when a second touch 102 occurs, a second maximum (e.g., second maximum 122 and second maximum 132) is introduced on each axis, as shown in FIG. 1B. The second touch 102 possibly introduces two “ghost touch” locations 103, creating multiple potential touch combinations.
From these dual maxima, it is possible to infer the following as potential touch combinations that could generate the detected-touch response: two fingers, one on each of the black circles; two fingers, one on each of the hashed circles; three fingers, at any combination of three of the four circles (four possible combinations); or four fingers, one at each circle. Of these seven possible combinations, it may not be possible to determine a) which of them is the present touch type, and b) (with the exception of the four-finger combination) where the real touches are located.
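The count of seven indistinguishable combinations can be checked with a short enumeration over the four intersection points. This is purely illustrative; the peak co-ordinates below are hypothetical, and real controllers do not enumerate placements this way.

```python
from itertools import combinations

def candidate_combinations(x_peaks, y_peaks):
    """All placements of fingers on the intersection points that would
    reproduce exactly the detected maxima on both axes."""
    points = [(x, y) for x in x_peaks for y in y_peaks]
    valid = []
    for n in range(2, len(points) + 1):
        for subset in combinations(points, n):
            # A placement is consistent only if its projections cover
            # every detected maximum on each axis.
            if ({p[0] for p in subset} == set(x_peaks)
                    and {p[1] for p in subset} == set(y_peaks)):
                valid.append(subset)
    return valid

# Hypothetical dual maxima on each axis:
combos = candidate_combinations([2, 7], [3, 8])
# 2 two-finger + 4 three-finger + 1 four-finger = 7 candidate cases
```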
A second type of conventional touchpad is composed of an XY array of independent sense elements, where each sensor element in a row or column is separately sensed. Here, each row and column is composed of multiple sensing elements, each capable of independent detection of a capacitive presence and magnitude. These may then be used to detect any number of substantially simultaneous touches. The drawback to this second type of conventional touchpad is the sheer number of elements that must be independently sensed, scanned, and evaluated for capacitive presence. For example, the first type of conventional touchpad, including an array of ten by ten coupled sensor elements, would require sensing and evaluation of twenty elements (ten rows and ten columns) to determine the presence and location of a touch. This same area, implemented as an all-points-addressable (APA) array (i.e., the second type of conventional touchpad), would require one hundred evaluations (10×10=100) to determine the location of a single touch, which is five times the number required by the first type of conventional touchpad.
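The scan-count comparison can be expressed directly; the ten-by-ten figures are those used in the example above.

```python
def scan_counts(n_rows, n_cols):
    """Number of independent capacitance evaluations per full scan."""
    row_column = n_rows + n_cols  # first type: coupled rows and columns
    apa = n_rows * n_cols         # second type: all-points-addressable
    return row_column, apa

rc, apa = scan_counts(10, 10)  # 20 vs. 100 evaluations per scan
```

The gap widens with array size, since the APA count grows with the product of the dimensions while the row/column count grows only with their sum.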
In addition to the processing and scanning time requirements of the second type of conventional touchpad, there is also the issue of physical routing. In an XY row/column arrangement, it is possible to use the sensing elements themselves as a significant part of the routing (at least for those parts occurring within the touch sensing area). With an APA array, each sensed location requires a separate wire or trace between the sensor element and the controller containing the capacitance sensing circuitry connected to the touchpad. In larger arrays, this can consume large routing resources. When implemented on transparent touch screens (where vias are not normally supported), it may not be physically possible to route all necessary sensor elements to the edge of the sensing grid to allow connection to a capacitance sensing controller.
One known solution to this second-touch problem requires that the fingers do not hit the touch screen at the exact same instant in time. When the first finger is detected, its X/Y co-ordinate is calculated as usual. When the second finger touches, creating a second centroid on both axes, the centroid algorithm generates two possible solutions, as shown in FIG. 1C. However, since the location of the first finger is already known, the exact location of the second finger can be deduced. There are multiple drawbacks to this solution. For example, it is indeed possible that both fingers hit the touch screen at the exact same time. Also, since the row and column sensors are scanned sequentially, and given that each individual scan may take on the order of 1 millisecond (msec) or more, the time taken to scan the entire touch screen and calculate the centroid could be as much as 20-30 msec. This can be thought of as the “sample” rate of the touch screen. Thus, even if the two fingers come into contact with the touch screen up to 30 msec apart, it is possible that they are recognized in the same scan and appear to the touch screen controller as truly simultaneous, and impossible to resolve. In addition, a third finger may be present that shares these same centroids and goes undetected. If, as the two touches are moved to perform a function, they ever line up to share a common axis, the orientation of the valid versus invalid (“ghost”) touches is lost. For this timing-based solution to work, the user must be educated to deliberately touch with one finger first and add the second finger after some delay. This presents a usability drawback.
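The sequential-arrival deduction described above can be sketched as follows, assuming the first finger has not moved between scans and using hypothetical peak co-ordinates:

```python
def resolve_second_touch(known, x_peaks, y_peaks):
    """Deduce the second finger's location from dual maxima, given the
    already-known first touch. If an axis shows only one maximum, both
    fingers are assumed to share that co-ordinate."""
    kx, ky = known
    # The second finger takes whichever maximum on each axis the first
    # finger does not account for.
    x2 = next((x for x in x_peaks if x != kx), kx)
    y2 = next((y for y in y_peaks if y != ky), ky)
    return (x2, y2)

first = (3, 5)  # located in an earlier scan, before the second arrival
second = resolve_second_touch(first, [3, 7], [5, 9])
```

As the text notes, this deduction fails whenever both fingers land within one sample period, and it silently ignores any third finger that shares the same maxima.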