A current personal computer system uses a pointing device, such as a mouse or track pad, as a user interface. However, the user must hold the mouse and slide it on a given surface, or must rub the surface of the track pad with his or her hand. These pointing devices therefore constrain the user's actions. Moreover, the GUI (Graphical User Interface) used in a personal computer system or the like is designed for a two-dimensional space, and is not suitable for use in a three-dimensional space.
For these reasons, it is common practice in the technical field of VR (Virtual Reality) or AR (Augmented Reality) to input commands to the system by switch operations on an input device that is held in the user's (player's) hand and has switch buttons and the like.
In such prior art, in which commands are input by switch operations on an input device having button switches and the like, the number of types of commands (instructions) is limited by the number of buttons. If the number of types of commands is increased, the number of buttons inevitably increases. As a result, the input device becomes large, and the load on the user (player) becomes heavier, as he or she must learn the button positions.
Learning the button positions imposes a heavy load on the user because the command contents bear no intrinsic relation to the button positions. In other words, it is difficult, if not impossible, to express various command contents (e.g., “forward movement”, “backward movement”, “stop”, and the like) by a single operation, i.e., the depression of a button (or a switch).
On the other hand, in the VR or AR field, devices for simulating a user's (player's) hand actions have been proposed. For example, in one technique, a sensor for detecting the bent angle of a finger joint is attached to the user's hand, and a CG (computer graphics) image is generated in correspondence with the bent angle detected by that sensor. However, this technique aims at simulating the user's hand actions, and it cannot, in practice, be applied to recognizing the user's (player's) instructions (or commands).
In this technique, for example, when the user stretches an arm forward, a CG image with the arm stretched forward is generated, and the display of such an image can, in a broad sense, be interpreted as the result of a user instruction, namely the forward stretching of the arm. However, if user commands are generated from hand actions alone, every hand action is unintentionally interpreted as a command, and such an interface has poor reliability.
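The reliability problem described above can be illustrated with a minimal sketch. The function name, joint-angle representation, and angle thresholds below are hypothetical assumptions for illustration only, not part of any actual prior-art system; the sketch merely shows that when raw joint angles are mapped directly to commands, every posture, including an incidental one, falls into some command bucket.

```python
# Hypothetical sketch of the prior-art approach: mapping raw finger-joint
# bend angles (in degrees) directly to commands. All names and thresholds
# are illustrative assumptions.

def interpret_hand_action(joint_angles):
    """Map a list of finger-joint bend angles to a command string.

    Because every possible posture falls into one of the buckets below,
    any incidental hand movement is also interpreted as a command,
    which is the reliability problem noted above.
    """
    avg_bend = sum(joint_angles) / len(joint_angles)
    if avg_bend < 15:        # nearly open hand -> "forward movement"
        return "forward movement"
    elif avg_bend > 60:      # clenched fist -> "stop"
        return "stop"
    else:                    # everything in between -> "backward movement"
        return "backward movement"

# Even an incidental, meaningless posture still produces a command:
print(interpret_hand_action([20, 30, 40]))  # -> "backward movement"
```

Such a scheme has no way to distinguish an intentional command gesture from an ordinary hand movement, which is why hand actions alone are an unreliable command channel.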
The present invention has been made to solve the above conventional problems, and has as its object to provide a user interface apparatus, a user interface method, and a game apparatus to which the user (player) can easily and intuitively become accustomed, and which can accurately recognize the instructions (commands) that the user (player) intended.