In recent years, owing to the rapid development and spread of smart devices, public interest in interface technologies for operating smart devices has been increasing rapidly. To reflect this trend, concentrated research and investment in the intelligent user interface realm have been made in each industrial field.
While the intelligent user interface has been researched for a considerably long time, the technical demand for it is increasing further with the recent growth of the smart device market.
Among intelligent interface technologies, gesture interface technology best reflects the convenience and intuitiveness demanded by users. The most representative example of gesture interface technology is the Kinect sensor developed by the Microsoft Corporation. The Kinect sensor realizes real-time interactive games by combining an RGB camera and an infrared camera sensor to recognize the gestures and motions of users. Thanks to the supply of low-cost Kinect hardware and the provision of publicly released libraries, many practical gesture recognition technologies have been developed.
Meanwhile, gesture recognition technology can be largely divided into technology that recognizes a static gesture, such as a hand pose, by detecting the hand, and technology that recognizes a dynamic gesture by using the movement trajectory of the hand. However, any such gesture recognition technology must be preceded by a stage that segments and detects the hand region from an image. To this end, most research is conducted on methods using color image information, methods using depth image information, and methods mixing color and depth information.
Among those methods, gesture recognition using color image information has been researched in various ways, since color is the information most easily handled among what can be acquired from an image. The method using color image information is advantageous in that it can rapidly detect a hand, but has a serious drawback in that it is vulnerable to changes in lighting conditions and environments.
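The color-based detection described above is commonly realized as a per-pixel skin-color thresholding rule. The sketch below is illustrative only: the specific thresholds (the well-known explicit RGB skin rule of Peer et al.) are an assumption for demonstration, not part of any cited method, and the final commented pixel shows how a darkened (poorly lit) skin pixel fails the rule, illustrating the lighting sensitivity noted above.

```python
import numpy as np

def skin_mask_rgb(image):
    """Return a boolean mask of skin-like pixels in an RGB image.

    Illustrative explicit RGB rule (Peer et al.): a pixel is skin-like if
    R > 95, G > 40, B > 20, the max-min channel spread exceeds 15,
    |R - G| > 15, R > G, and R > B. Thresholds are assumptions.
    """
    img = image.astype(np.int32)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    spread = img.max(axis=-1) - img.min(axis=-1)
    return ((r > 95) & (g > 40) & (b > 20) &
            (spread > 15) & (np.abs(r - g) > 15) &
            (r > g) & (r > b))

# Toy 2x2 frame: one well-lit skin pixel, one grey background pixel,
# and one under-lit skin pixel that the rule misses.
frame = np.zeros((2, 2, 3), dtype=np.uint8)
frame[0, 0] = (200, 120, 90)   # well-lit skin tone -> detected
frame[1, 1] = (128, 128, 128)  # grey background   -> rejected
frame[1, 0] = (60, 35, 25)     # dark skin tone (dim lighting) -> missed
mask = skin_mask_rgb(frame)
```

The missed dark pixel is the crux of the drawback: a fixed color rule cannot follow skin tones across lighting changes without re-tuning.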
To overcome this drawback, research on combining color image information with depth image information has been conducted, but this approach is also problematic: since it depends on the color image in a pre-processing stage, it remains highly sensitive to lighting conditions.
In addition, research on using a depth image alone is being conducted, but such methods are problematic in that they require the precondition that the hand be located in the foremost position with respect to the camera, and the distance information for separating the arm region from the hand region cannot easily be discriminated.
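A minimal sketch of such a depth-only approach, under the stated precondition that the hand is nearest to the camera, keeps every pixel within a fixed depth band of the closest measured point. The band width (`hand_span_mm`) and the toy depth values are assumptions for illustration; note how forearm pixels at a similar depth fall inside the band, which is exactly the arm/hand separation difficulty described above.

```python
import numpy as np

def hand_mask_from_depth(depth, hand_span_mm=120):
    """Segment the foremost object from a depth image (values in mm).

    Assumes the hand is the object closest to the camera. Keeps every
    valid pixel within `hand_span_mm` of the nearest measured depth;
    zero readings are treated as invalid sensor output.
    """
    valid = depth > 0
    nearest = depth[valid].min()
    return valid & (depth <= nearest + hand_span_mm)

# Toy depth map (mm): hand ~600, forearm ~700, body ~900, one invalid pixel.
depth = np.array([[600, 650, 700],
                  [620, 700, 900],
                  [0,   900, 950]], dtype=np.uint16)
mask = hand_mask_from_depth(depth)
```

With `nearest = 600` and a 120 mm band, the 700 mm forearm pixels survive the threshold alongside the hand, showing why a single depth cut cannot discriminate the wrist boundary.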
In this regard, Korean Patent Application Publication No. 2013-0043394 (Title of Invention: METHOD OF IMAGE PROCESSING FOR DETECTING OBJECT, DEVICE, METHOD FOR USER INTERFACE AND USER INTERFACE THEREOF) describes extracting an object by using only the depth information of an image acquired from a stereo camera or the like.
In addition, Korean Patent Application Publication No. 2013-0050672 (Title of Invention: Method of virtual touch using 3D camera and apparatus thereof) describes determining the existence of a touch by detecting a screen region separated from the peripheral region of a screen part, detecting a body part region when the figure of a touch performer is sensed on the screen region, and comparing the depth value of the screen region with the depth value of the body part region, both measured with respect to the 3-dimensional camera.