1. Field of the Invention
Embodiments of the invention relate to a method and a device for detecting a touch position, and more particularly, to a method and a device for detecting a touch position capable of accurately detecting touch coordinates and a flat panel display using the method and the device.
2. Discussion of the Related Art
Examples of a flat panel display include a liquid crystal display (LCD), a plasma display panel (PDP), a field emission display (FED), and an organic light emitting diode (OLED) display. Most of them have been put to practical use in electric home appliances or personal digital appliances and are commercially available.
With the recent trend toward thin profile and light weight in electric home appliances and personal digital appliances, a touch sensor has been substituted for a button switch as the user's input means. Examples of the touch sensor include a capacitive touch sensor, a resistance touch sensor, a pressure touch sensor, an optical touch sensor, and an ultrasonic touch sensor. As a kind of optical touch sensor, an in-cell touch panel type touch sensor, in which touch sensors are formed inside pixels of a display device, has been widely used.
The in-cell touch panel type touch sensor, as shown in FIG. 1, includes a sensor thin film transistor (TFT) generating a light current “i” that differs depending on a touch or non-touch operation, a sensor capacitor Cst storing charges resulting from the light current “i”, and a switch TFT outputting the charges stored in the sensor capacitor Cst. Hence, in the in-cell touch panel type touch sensor, touch data generated in the touch operation is different from touch data generated in the non-touch operation. A flat panel display can detect information about a touch position of a user's finger or a touch pen based on the touch data from the in-cell touch panel type touch sensor.
The optical touch sensor has a problem in that the touch data is greatly affected by external illuminance or a shadow. To solve this problem, an optical black method and a reference image difference method were proposed. However, the optical black method cannot remove a panel-specific deviation of a display panel, and the reference image difference method wrongly recognizes an image on the display screen or an image reflected by a polarizing plate as a basic received light pattern. Accordingly, a frame difference method has recently been proposed to solve these problems. In the frame difference method, touch frame data input in a previous frame is subtracted from touch frame data input in a current frame to generate new difference data. Then, a meaningful touch boundary portion is calculated using a predetermined threshold value as a parameter.
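The frame difference computation described above can be sketched as follows. This is a minimal illustrative implementation, not part of the related art; the function name, the NumPy representation of the touch frames, and the use of an absolute-value comparison against the threshold are assumptions made for the example.

```python
import numpy as np

def frame_difference(current, previous, threshold):
    """Subtract the previous touch frame from the current one and keep
    only the points whose change exceeds the threshold (the effective
    points forming the touch boundary portion)."""
    # Widen to a signed type so the subtraction cannot wrap around.
    diff = current.astype(np.int32) - previous.astype(np.int32)
    # A point is treated as effective when its frame-to-frame change
    # is larger than the threshold parameter.
    boundary = np.abs(diff) > threshold
    return diff, boundary
```

Because only consecutive frames are compared, slowly varying external illuminance and panel-specific offsets cancel out of `diff`, which is the motivation for the frame difference method stated above.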
To obtain the touch coordinate required in the flat panel display, a quadratic curve of a portion corresponding to the user's fingertip has to be found in the calculated touch boundary portion. Most quadratic curve detecting algorithms require performing a matrix operation of N×M size on image data of N×M size. In particular, a determinant matrix operation, an eigenvector matrix operation, and an eigenvalue matrix operation are required. The determinant matrix operation is performed in an iterative manner, and the eigenvector matrix operation requires an inverse matrix. Therefore, even if the matrix operations are performed on only a portion of the image data, the operation time increases algorithmically without bound.
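For illustration only, one lightweight way to locate the fingertip from boundary points is a least-squares fit of a quadratic curve, whose vertex approximates the fingertip; this sketch stands in for the heavier determinant/eigenvector operations mentioned above and is not the method of the related art. The function name and the choice of `numpy.polyfit` are assumptions.

```python
import numpy as np

def fit_fingertip_curve(xs, ys):
    """Fit y = a*x^2 + b*x + c to boundary points by least squares.
    The vertex of the fitted parabola approximates the fingertip."""
    a, b, c = np.polyfit(xs, ys, 2)
    x_tip = -b / (2 * a)          # vertex x-coordinate
    y_tip = a * x_tip**2 + b * x_tip + c
    return x_tip, y_tip
```

Even this simplified fit must be run over candidate boundary segments, which is why the scanning-order heuristics discussed next are used to limit how much of the difference data is examined.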
To reduce operation time required to detect the touch coordinate, an upward priority manner and an edge light amount decision manner are known.
In the upward priority manner, as shown in FIG. 2A, while the difference data is scanned from the top in the arrow direction, a first sensed effective point (corresponding to a circle in FIG. 2A) is recognized as the touch coordinate of the user's fingertip. However, the upward priority manner can be applied only when the user's finger approaches from the bottom to the top. For example, when the user's finger approaches from the side as shown in (a) of FIG. 2B, or when the user's finger approaches from the top to the bottom as shown in (b) of FIG. 2B, the upward priority manner has great difficulty in detecting the touch coordinate.
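The upward priority scan can be sketched as a row-by-row search starting from the top of the difference data; the first above-threshold point is returned as the touch coordinate. The function name and the nested-list input format are illustrative assumptions.

```python
def upward_priority_scan(diff_data, threshold):
    """Scan difference data row by row from the top; the first effective
    point (value above the threshold) is taken as the touch coordinate.

    Returns an (x, y) tuple, or None if no effective point is found.
    """
    for y, row in enumerate(diff_data):
        for x, value in enumerate(row):
            if value > threshold:
                return (x, y)
    return None
```

The sketch makes the limitation above concrete: the scan order is fixed, so it finds the fingertip only when the fingertip is the topmost effective point, i.e. when the finger approaches from the bottom.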
In the edge light amount decision manner, as shown in FIG. 3A, edge sensor areas are formed using touch sensors, and a priority scanning direction is determined on the assumption that when the user's finger approaches, a shadow cast by the finger is reflected in the edge sensor area on the approaching side. In (a) of FIG. 3A, because an output value of the bottom edge sensor area is different from the output values of the other edge sensor areas, the difference data is scanned in an upward priority scanning manner. In (b) of FIG. 3A, because an output value of the left edge sensor area is different from the output values of the other edge sensor areas, the difference data is scanned in a right priority scanning manner. In (c) of FIG. 3A, because an output value of the right edge sensor area is different from the output values of the other edge sensor areas, the difference data is scanned in a left priority scanning manner. In (d) of FIG. 3A, because an output value of the top edge sensor area is different from the output values of the other edge sensor areas, the difference data is scanned in a downward priority scanning manner. Then, a first sensed effective point (corresponding to a circle in FIG. 3A) is recognized as the touch coordinate of the user's fingertip.
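The direction decision above can be sketched as follows: the edge whose mean output deviates most from the others is assumed to carry the finger's shadow, and scanning starts from that side toward the opposite side. The function name, the use of per-edge mean values, and the deviation-from-overall-mean criterion are illustrative assumptions, not details from the related art.

```python
import statistics

def decide_scan_direction(top, bottom, left, right):
    """Pick the edge sensor area whose mean output deviates most from
    the overall mean; scanning then proceeds away from that edge,
    following the correspondence of FIG. 3A (a)-(d)."""
    means = {
        'top': statistics.mean(top),
        'bottom': statistics.mean(bottom),
        'left': statistics.mean(left),
        'right': statistics.mean(right),
    }
    overall = statistics.mean(means.values())
    shadow_edge = max(means, key=lambda name: abs(means[name] - overall))
    # Shadow on the bottom edge -> finger approaches from below
    # -> upward priority scanning, and so on for the other edges.
    priority = {'bottom': 'upward', 'left': 'right',
                'right': 'left', 'top': 'downward'}
    return priority[shadow_edge]
```

As the following paragraphs explain, this decision is only as reliable as the assumption that the largest edge deviation is caused by the finger's shadow rather than by panel deviation, external noise, or the displayed image.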
In the edge light amount decision manner, the four edge sensor areas of each panel have to be initialized to the same level so as to accurately determine the scanning direction from the deviation caused by the shadow of the user's finger. However, even if the four edge sensor areas of each panel are initialized, it is very difficult to completely overcome a panel-specific deviation between panels, a deviation due to external noise, a deviation according to the image displayed on the screen, and so on. As a result, the touch coordinate may be detected from difference data containing a deviation greater than the deviation caused by the shadow of the user's finger.
Further, as shown in (a) of FIG. 3B, when the user's finger spans two edge sensor areas, it is difficult for the edge light amount decision manner to detect the touch coordinate corresponding to the user's fingertip. As shown in (b) and (c) of FIG. 3B, when the user's finger overlaps one edge sensor area, a wrong position may be detected as the touch coordinate because the scanning direction is wrongly determined.