Field of the Invention
The present invention relates to an information processing apparatus that performs specific processing in response to a motion (gesture) of a finger of a user, and a method of controlling the information processing apparatus.
Description of the Related Art
As an input operation method for a television receiver, a recorded video reproducing apparatus, a remote conference system, and the like, a gesture input operation method using a motion of a finger of a user or a body expression has appeared. In the gesture input operation method, an image of the motion (gesture) of the finger of the user is picked up, a pattern of a locus of the motion of a specific portion (for example, a tip portion of the finger) is identified from the picked-up image data, and a value or an operation command corresponding to the identified pattern is input.
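The identification step described above can be sketched in a minimal form: the fingertip locus is reduced to a sequence of quantized movement directions, and the resulting code string is looked up in a template table of gesture patterns. The direction codes, template table, and command names below are illustrative assumptions for this sketch, not the method of any cited reference.

```python
import math

def direction_code(dx, dy):
    """Quantize a movement vector into one of four direction codes:
    'R' (right), 'U' (up), 'L' (left), 'D' (down)."""
    angle = math.degrees(math.atan2(dy, dx)) % 360
    if angle < 45 or angle >= 315:
        return 'R'
    if angle < 135:
        return 'U'
    if angle < 225:
        return 'L'
    return 'D'

def locus_to_codes(points):
    """Convert a fingertip locus (a list of (x, y) points) into a
    string of direction codes, collapsing consecutive repeats."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        c = direction_code(x1 - x0, y1 - y0)
        if not codes or codes[-1] != c:
            codes.append(c)
    return ''.join(codes)

# Illustrative template table: locus pattern -> operation command.
TEMPLATES = {
    'R': 'next_channel',
    'L': 'previous_channel',
    'RDLU': 'open_menu',   # a clockwise rectangle-like stroke
}

def identify_gesture(points):
    """Return the operation command for a locus, or None if no
    template matches."""
    return TEMPLATES.get(locus_to_codes(points))
```

For example, a rightward swipe such as `[(0, 0), (1, 0), (2, 0)]` collapses to the code string `'R'` and is identified as the `'next_channel'` command, while a locus matching no template yields `None`.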
Japanese Patent Application Laid-Open No. 2012-098987 describes a gesture identification device that activates a gesture input when the finger is positioned outside a start/end point input determination region on image data obtained by picking up an image of the user. Further, Japanese Patent Application Laid-Open No. 2012-098987 describes that the start/end point input determination region is enlarged beyond its initial size when the position of the finger is within the start/end point input determination region, and is returned to the initial size when the finger is positioned outside the start/end point input determination region.
Japanese Patent Application Laid-Open No. 2012-146236 describes a gesture input device that activates operation control with a gesture only when the finger of the user exists within a gesture identification region set in advance in a real space.
US Patent Application Publication No. 2013/0016070 describes an input operation method for a head mounted display (HMD)-type information processing terminal, in which a graphical user interface (GUI) is projected on a real object such as an arm or a hand, and a touch on the projected GUI is detected.
In the conventional technologies, a problem remains in that it is difficult to distinguish between a gesture performed by the user with the intention of an input operation and a motion of the finger made without the purpose of an input operation.
The technology disclosed in Japanese Patent Application Laid-Open No. 2012-146236 sets a gesture identification region in advance, determines a motion of the finger inside the region to be a “gesture performed with the intention of an input operation”, and determines a motion of the finger outside the region to be “another operation”. In this technology, all motions of the finger in the gesture identification region are identified as gestures. For example, when the user intends to input a character, this technology cannot distinguish between a movement of the finger that runs the pen (a movement of the finger that inputs a line constituting the character) and a movement of the finger from stopping the pen (so-called “stopping”) to starting the pen (so-called “typing”). Further, when some sort of drawing is performed, this technology cannot distinguish between a so-called “pen-on” state, in which an input of drawing is being performed, and a “pen-off” state, in which the input is not performed.
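The limitation described above can be made concrete with a small sketch (an illustrative assumption, not the implementation of the cited device): when gesture activation depends only on whether the fingertip lies inside a predefined identification region, every in-region sample is treated as part of the gesture, so intended strokes (pen-on) and transfer movements between strokes (pen-off) become indistinguishable.

```python
# Illustrative gesture identification region (an axis-aligned box in
# image coordinates); the bounds are assumptions for this sketch.
REGION = {'x': (100, 400), 'y': (100, 300)}

def in_region(point):
    """True if a fingertip sample lies inside the identification region."""
    x, y = point
    return (REGION['x'][0] <= x <= REGION['x'][1]
            and REGION['y'][0] <= y <= REGION['y'][1])

def collect_gesture_samples(samples):
    """Region-only activation: keep every sample inside the region.
    Pen-on (drawing) and pen-off (moving to the next stroke) samples
    inside the region are both collected, so the resulting locus mixes
    intended lines with transfer movements."""
    return [p for p in samples if in_region(p)]

# A user writes two separate strokes inside the region; the movement
# between stroke 1 and stroke 2 (a pen-off transfer) also lies inside
# the region and is collected as if it were part of the gesture.
stroke1 = [(120, 150), (160, 150)]   # intended line (pen on)
transfer = [(180, 180), (200, 210)]  # pen-off movement between strokes
stroke2 = [(220, 240), (260, 240)]   # intended line (pen on)
locus = collect_gesture_samples(stroke1 + transfer + stroke2)
```

Because the pen-off transfer samples satisfy the same region test as the stroke samples, the collected locus contains all six points, illustrating why region-only activation cannot separate the two states.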