Field of the Invention
This invention relates to an intelligent gesture interface robot (GIR) equipped with a video vision sensor that reads user hand gestures in order to operate computers, machines, and intelligent robots. The gesture-reading method is unique in that the GIR places a puzzle cell mapping, dynamic, multiple-sandwich-layer work zone, comprising a virtual touch-screen mouse, keyboard, and control panel, within the user's comfortable gesture action area. The user simply moves a hand to a cell and pushes forward to click, so the interface is easy to operate. It does not require sweeping arm swings or awkward body postures through which the user could be injured or strike objects or people nearby; the puzzle cell mapping gesture method of this invention is therefore a safe and efficient solution. With simple gesture actions the user can control many kinds of computer machines together, without having to remember which body posture corresponds to which command. The GIR displays a real-time highlight on a keyboard graphic image or on a display monitor as a visual indication, so the user knows which command is currently selected and extends a hand forward to confirm the selection. The puzzle cell mapping gesture command method thus enables simple move-and-click gesture actions to control multiple complex computer machines and robots at the same time.
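The move-and-click behavior described above can be illustrated with a minimal sketch. This is not the patented implementation; it assumes a hypothetical hand tracker that yields normalized (x, y, z) hand coordinates, and the grid size and push threshold are invented for illustration only.

```python
# Illustrative sketch of the puzzle cell mapping idea, assuming a hand
# tracker that reports normalized (x, y, z) coordinates in the work zone.
# Grid dimensions and the push threshold below are hypothetical choices.

GRID_ROWS, GRID_COLS = 4, 10          # virtual keyboard laid out as a cell grid
PUSH_THRESHOLD = 0.15                 # forward extension (z) that counts as a click

def hand_to_cell(x, y):
    """Map a normalized hand position (0..1) in the work zone to a grid cell."""
    col = min(int(x * GRID_COLS), GRID_COLS - 1)
    row = min(int(y * GRID_ROWS), GRID_ROWS - 1)
    return row, col

def interpret(x, y, z):
    """Return the highlighted cell and whether a push-to-click occurred."""
    cell = hand_to_cell(x, y)
    clicked = z >= PUSH_THRESHOLD     # extending the hand forward confirms
    return cell, clicked

# Example: hand near the upper-left of the zone, pushed forward
print(interpret(0.05, 0.1, 0.2))     # → ((0, 0), True)
```

The highlighted cell would drive the real-time visual indication on the keyboard graphic, and the click flag would fire the selected command.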
Background Art
The GIR uses a puzzle cell mapping, dynamic, multiple-sandwich-layer work zone of a virtual touch-screen mouse, keyboard, and control panel, positioned within the user's comfortable gesture action area. The user easily moves a hand and pushes forward to click, applying simple gesture actions to control complex machines in real time. Because no large swings or strained postures are needed, the GIR helps prevent injury.
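The "multiple sandwich layers" notion can likewise be sketched as depth bands within the work zone, each bound to a different virtual device. This is a hypothetical model only; the layer names and depth boundaries below are assumptions, not taken from the invention.

```python
# Illustrative sketch: sandwich layers modeled as depth bands, each mapped
# to one virtual device. Names and boundaries are hypothetical.

LAYERS = [
    (0.00, 0.33, "mouse"),          # nearest band acts as the virtual mouse
    (0.33, 0.66, "keyboard"),       # middle band acts as the virtual keyboard
    (0.66, 1.01, "control_panel"),  # farthest band acts as the control panel
]

def active_layer(z):
    """Select which sandwiched virtual device a hand depth z (0..1) addresses."""
    for near, far, name in LAYERS:
        if near <= z < far:
            return name
    return None

print(active_layer(0.5))   # → keyboard
```

Under this model, moving the hand in x and y selects a puzzle cell within the active layer, while hand depth chooses which virtual device the cell belongs to.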