1. Field of the Invention
The present invention relates to a device for generating a signal by detecting facial movement and an operating method thereof. In particular, the present invention relates to a device and an operating method for determining the intention of a user by detecting his or her mouth movement and generating a corresponding signal.
2. Description of the Related Art
The human hand has many joints, so it can perform diverse actions. Thus, machines and vehicles are generally designed for manual operation, using keyboards, steering wheels, buttons, handles, etc. However, some physically disabled persons cannot operate devices designed for manual operation, which causes many problems in daily living. On the other hand, even non-disabled persons are sometimes unable to operate a device by hand, such as when they are carrying heavy loads with both hands and find it difficult to open a door, when they are driving and using a cell phone at the same time is prohibited to ensure safety, or when a user is being kidnapped and must seek help in a non-obvious way. Therefore, many manufacturers are racking their brains to find a solution to the problem of how to interact with an electronic device without using the hands.
Compared with the hands, the human face can also be controlled to produce various expressions. Unlike the hands, facial actions cannot operate devices through direct contact, such as pushing, pressing, or pulling; however, with the development of image processing technologies, some technologies have been developed that analyze facial movements to generate signals for operating electronic devices. For example, Taiwan patent No. 565754 discloses an eye-controlled driving assistant system, which controls tools, such as an electric wheelchair, by detecting eye movement. Therefore, even a physically disabled person can drive an electric wheelchair with a simple operating method. The system effectively captures an image of the eyes, computes the exact coordinate location on the screen, and allows a user to control the driving of the vehicle by winking actions, which serve as signals to the system. However, eye actions are not as nimble as hand actions. Eye movement consists only of monotonously looking around, opening, and closing, so the expressiveness of eye actions is limited. In addition, when a user is in an accident or is being coerced, the user needs a simple way to seek help through facial movements. In the prior art, the signals can only be used to assist a driving system and are unable to deliver such complicated messages. Therefore, it is known that the existing technology of detecting facial movement to generate a signal has the problem that it is difficult to express a complicated intention and difficult to transmit messages. To this problem, the prior art has not provided an effective solution.