The touchscreen has become an indispensable component of a growing number of electronic devices, such as smartphones and tablets. In a traditional architecture, a touch controller comprising an analog front-end (AFE) and a digital signal processor (DSP) is utilized as an intermediary between a touch sensor panel and a processor (e.g., a System on Chip or "SoC"). The traditional touch controller may convert electrical signals generated by the touch sensor panel in response to detected touches thereto into digital signals and extract two-dimensional (e.g., x, y) coordinates associated with the touches from the digital signals. Further, the traditional touch controller may transmit the two-dimensional coordinates to the processor via a suitable interface.
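The coordinate extraction performed by the DSP can be illustrated with a simple sketch. The following is a hypothetical example only, not the actual algorithm of any particular touch controller: it assumes the digitized signals form a two-dimensional frame of sensor values and computes a weighted centroid of the above-threshold cells; the function name and threshold value are illustrative assumptions.

```python
# Hypothetical sketch of DSP-style coordinate extraction: locate the
# touched region in a digitized sensor frame and report its weighted
# centroid as (x, y) coordinates. THRESHOLD is an assumed noise floor.

THRESHOLD = 30

def extract_centroid(frame):
    """Return the (x, y) centroid of above-threshold cells, or None."""
    total = sum_x = sum_y = 0.0
    for y, row in enumerate(frame):
        for x, value in enumerate(row):
            if value > THRESHOLD:
                total += value
                sum_x += x * value
                sum_y += y * value
    if total == 0:
        return None  # no touch detected in this frame
    return (sum_x / total, sum_y / total)
```

For example, a frame whose only above-threshold cell is at row 1, column 1 yields the centroid (1.0, 1.0). A real DSP would additionally perform filtering, baseline tracking, and multi-touch segmentation, which are omitted here.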
An improvement upon the traditional architecture is known as the split architecture. In the split architecture, the touch controller is simplified. The simplified touch controller may comprise only an AFE, and may be capable of generating digital raw touch image data based on electrical signals generated by the touch sensor panel in response to detected touches thereto and transmitting the raw touch image to the processor via a suitable interface. Further processing of the raw touch image, such as the extraction of coordinates from the raw touch image, may be performed by the processor.
The split architecture has several advantages. For example, in the split architecture, existing hardware of the processor may be leveraged to perform functions that are performed by the DSP in the traditional architecture, thereby simplifying the touch controller and lowering its cost. Moreover, using the processor to process the raw touch image may allow for the use of more advanced algorithms and therefore may enable more sophisticated features or functionality not possible with the traditional touch controller.
One drawback of the split architecture relates to a touchscreen-enabled function known as “touch to wake.” With touch to wake, the user may predefine gestures or symbols (referred to collectively as “gestures” hereinafter), e.g., patterns to be drawn on the touchscreen, that, when detected at the touchscreen, cause the device to wake up, unlock, and/or perform a particular predefined task (e.g., launch a particular application). Multiple gestures may be defined, and each gesture may be associated with a particular operation (e.g., wake up, unlock, perform a task, etc.). A gesture may be as simple as a letter drawn on the touchscreen.
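The association between gestures and operations described above amounts to a lookup table. The following sketch assumes such a table; the gesture names and operations are illustrative assumptions, not values from any actual device.

```python
# Illustrative touch-to-wake gesture table: each predefined gesture is
# mapped to the operation it triggers. All entries are hypothetical.

GESTURE_ACTIONS = {
    "double_tap": "wake_up",
    "letter_W": "unlock",
    "letter_C": "launch_camera",
}

def dispatch(gesture):
    """Return the operation bound to a recognized gesture, or None."""
    return GESTURE_ACTIONS.get(gesture)
```

A recognized gesture such as "letter_C" would thus map to its predefined operation, while an unrecognized pattern maps to no operation at all.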
With the traditional architecture, the touch controller may be capable of independently determining whether a drawing of a gesture is being attempted at the touch sensor panel without involving the processor. If the processor is in a sleep mode, the touch controller may still be able to independently determine whether a drawing of a gesture is being attempted, and once the touch controller positively determines that a drawing of a gesture is being attempted, it may transmit a signal to the processor to wake the processor up.
With the split architecture, however, the simplified touch controller may lack the ability to independently determine whether a drawing of a gesture is being attempted at the touch sensor panel; the determination may have to be made by the processor. Therefore, if the processor is in a sleep mode, then as soon as any touch begins, the simplified touch controller faces two alternatives. It may wake the processor up before the touch controller's embedded memory fills up with raw touch image frames, which happens very quickly given the limited size of the embedded memory and the size of the frames (e.g., a raw touch image frame with a resolution of 64×40 may take up more than 5 kilobytes (kB) of storage space). Alternatively, it may risk losing the raw touch image frames corresponding to the initial portions of a gesture. Neither alternative is desirable. The waking up of the processor may ultimately prove unnecessary if, e.g., the touches are accidental and do not constitute a gesture, in which case precious battery energy would have been wasted on waking the processor from the sleep mode. On the other hand, losing the raw touch image frames corresponding to the initial portions of a gesture may reduce the accuracy of gesture recognition, especially for complex gestures. Algorithms for extrapolating the missing portions of a gesture from the known portions are known, but their results can be inaccurate. Nor is increasing the embedded memory of the touch controller desirable, as doing so may be too costly.
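How quickly the embedded memory fills can be shown with back-of-the-envelope arithmetic. The 64×40 resolution and the greater-than-5-kB per-frame figure come from the text above; the 16-bit cell depth, the 32 kB buffer size, and the 120 Hz scan rate are illustrative assumptions only.

```python
# Sketch of how fast raw touch image frames exhaust a touch controller's
# embedded memory. Cell depth, buffer size, and frame rate are assumed.

ROWS, COLS = 40, 64          # 64x40 raw touch image resolution
BYTES_PER_CELL = 2           # assumed 16-bit digitized value per cell
FRAME_BYTES = ROWS * COLS * BYTES_PER_CELL   # 5120 bytes, i.e. > 5 kB

BUFFER_BYTES = 32 * 1024     # assumed 32 kB of embedded memory
FRAME_RATE_HZ = 120          # assumed touch scan rate

frames_buffered = BUFFER_BYTES // FRAME_BYTES        # whole frames held
seconds_until_full = frames_buffered / FRAME_RATE_HZ # time before loss
```

Under these assumptions the buffer holds only 6 frames, i.e., about 50 milliseconds of touch data, which illustrates why the controller must either wake the processor almost immediately or begin discarding frames.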