1. Technical Field
The disclosed embodiments relate in general to user interfaces of computing devices and, more specifically, to systems and methods for enabling gesture control based on detection of occlusion patterns.
2. Description of the Related Art
Gesture input is now a common way to interact with computer systems. Examples of devices that utilize gestures for user interaction include touchscreens on smartphones and tablets, as well as in-the-air touchless gesture controllers for gaming systems. Exemplary systems for providing simple and low-cost gesture interaction capability for projected displays using a camera are described, for example, in Kane, S. K., D. Avrahami, J. O. Wobbrock, B. Harrison, A. D. Rea, M. Philipose, and A. LaMarca, Bonfire: a nomadic system for hybrid laptop-tabletop interaction, Proc. of UIST '09, pp. 129-138, and in Kjeldsen, R. C., Pingali, G., Hartman, J., Levas, T., and Podlaseck, M., Interacting with steerable projected displays, Proc. of the Intl. Conf. on Automatic Face and Gesture Recognition (FGR '02), pp. 402-407.
From the user's point of view, touching a solid surface has clear advantages: it gives the user direct physical feedback. Furthermore, the solid surface of the user interface device provides some support to the user's arm, which reduces fatigue. In contrast, touchless gesturing near the surface is beneficial in scenarios where the user's physical contact with the surface should be avoided for various reasons, such as in hospitals, factories, kitchens, or other public places. Another situation in which touchless user interfaces are more desirable is when security is an issue, because touch residues are susceptible to “smudge attacks,” as described, for example, in Aviv, A., Gibson, K., Mossop, E., Blaze, M., and Smith, J., Smudge attacks on smartphone touch screens, Proc. of the 4th USENIX Workshop on Offensive Technologies (WOOT '10).
U.S. patent application Ser. No. 13/865,990 describes an approach to enabling touch and touchless interaction on a surface that uses a camera to monitor graphical user interface widgets instead of tracking the user's fingers or hands. The widgets are designed with hotspots on them, and as the user makes a gesture over a widget, the system looks for patterns of occlusion over its hotspots (or, more precisely, over several sensor pixels inside each hotspot). The hotspots are designed to be visually salient and to provide feedback to the user. The advantages of this approach are that it has better perceived affordance than in-the-air gestures, it can be easily set up and calibrated, and it is computationally more efficient than finger tracking. However, the system described in the aforesaid U.S. patent application Ser. No. 13/865,990 supports only user interaction widgets that generate discrete events (e.g., button clicks).
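The general idea of hotspot-based occlusion detection described above can be illustrated with a minimal sketch. The code below is purely illustrative and is not the implementation of the aforesaid patent application; all function names, thresholds, and the click-on-release behavior are hypothetical assumptions. It compares camera intensity samples inside a hotspot against an unoccluded baseline and fires a discrete event when the hotspot is occluded and then released:

```python
# Illustrative sketch only; names and thresholds are hypothetical and
# do not reflect the actual system of Ser. No. 13/865,990.

def is_occluded(samples, baseline, diff_threshold=40, min_fraction=0.6):
    """Return True if enough sensor pixels inside a hotspot deviate
    from their baseline (unoccluded) intensities."""
    changed = sum(
        1 for s, b in zip(samples, baseline) if abs(s - b) > diff_threshold
    )
    return changed / len(samples) >= min_fraction


def detect_click(frames, baseline):
    """Fire a discrete 'click' event when the hotspot transitions from
    occluded back to unoccluded (finger covers the hotspot, then lifts)."""
    was_occluded = False
    for frame in frames:
        occluded = is_occluded(frame, baseline)
        if was_occluded and not occluded:
            return True
        was_occluded = occluded
    return False


# Example: an 8-pixel hotspot, one occluded frame between two clear frames.
baseline = [200] * 8
clear = [200] * 8
covered = [100] * 8
print(detect_click([clear, covered, clear], baseline))  # True
```

Because only a handful of pixels per hotspot are examined, a per-frame check of this kind is far cheaper than full finger or hand tracking, which is consistent with the computational-efficiency advantage noted above.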
On the other hand, as would be appreciated by those of skill in the art, it would be advantageous to have a system that supports continuous user-interaction events, such as panning and zooming of an image. Exemplary use scenarios for such a system include viewing maps, photos, medical images, architectural drawings, and the like. Thus, new and improved gesture-based user interfaces with more robust gesture detection are needed to support these types of user applications.