(1) Field of the Invention
The present invention relates to an image display device and a display control method thereof. More particularly, the present invention relates to an image display device that visually recognizes a user's gesture, thereby enhancing the user-friendliness of an interface for giving instructions to an electronic device, and to a display control method thereof.
(2) Description of the Related Art
In the past, users of a TV, a video recorder, or other video device, or of a PC or other information processing device, generally entered data or commands with a keyboard or a pointing device such as a mouse, or performed channel selection or an image display procedure with a remote controller.
Owing to recent improvements in image recognition technology, however, a method of visually recognizing a user's gesture, determining the user's intention in accordance with the result of recognition, and operating a device accordingly has been proposed, particularly in the fields of gaming devices and operation guide devices.
An image recognition device disclosed, for instance, in Japanese Patent No. 4318056 recognizes the form and motion of hands and fingers and identifies an intended operation.
The image recognition device disclosed in Japanese Patent No. 4318056 creates an operating plane on a marker corresponding to a body position, and recognizes instructions in accordance with the positional movement of a hand or finger in the operating plane. The operating plane is a virtual operating plane. According to Paragraph 0033 of Japanese Patent No. 4318056, “an operator 102 can perform an input procedure with ease by extending his/her hand 601 to an operating plane 701 which is virtually defined in accordance with a marker 101, or by moving the hand 601 so as to touch a part of the screen of a monitor 111 and the operating plane 701 in the same manner as for a touch panel.”
However, the image recognition device disclosed in Japanese Patent No. 4318056 has the following disadvantages because it defines the operating plane in accordance with the body position:
(1) The timing of calibration cannot be determined with ease because the position of the operating plane is determined before the operator extends his/her hand. If, in particular, there are two or more persons in front of the screen, it is practically impossible to select one person as a target for which an operating region is to be defined.
(2) Processing load is increased because the body position is to be recognized.
(3) Positioning is difficult to achieve when, for instance, the operator is in a lying position.
Another conceivable method is to recognize a user's hand and define the region of an operating plane in accordance with the position of the hand. The processing load is low when only the hand needs to be recognized, because recognition can be achieved with relative ease by grasping the characteristics of the hand.
However, when the above method is used to determine the position of the operating region, it has the disadvantage that the timing of calibration cannot be determined with ease.
FIG. 8 illustrates a case where the operating region is created near the waist of a user when the user extends his/her hand from below.
When, for instance, the user extends his/her hand from below, the operating region 80 is created near the user's waist because the moving hand is first detected close to the waist, as shown in FIG. 8. The operating region 80 created in this manner is positioned apart from an operating region 81 that the user presumably intends to handle. Consequently, the user cannot perform an operation with ease.
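The problem described above can be sketched in code. The following is a hypothetical illustration, not the patent's method: an operating region is naively centered on the position where the hand is first detected, so a hand entering the frame from below anchors the region near the waist rather than at a comfortable operating height. All names and dimensions here are illustrative assumptions.

```python
# Hypothetical sketch of naive hand-anchored region placement.
# Coordinates are in image pixels; (0, 0) is the top-left corner,
# so larger y values are lower in the frame.

def define_operating_region(hand_x, hand_y, width=200, height=150):
    """Return an operating region (left, top, right, bottom)
    centered on the first detected hand position."""
    return (hand_x - width // 2, hand_y - height // 2,
            hand_x + width // 2, hand_y + height // 2)

# If the rising hand is first detected low in a 480-pixel-tall frame
# (near the waist), the region is anchored there, far from the
# chest-height region the user presumably intends to operate.
region = define_operating_region(320, 420)
print(region)  # centered at (320, 420), near the bottom of the frame
```

Because calibration fires on the first detection, the naive approach has no way to distinguish a hand in transit from a hand that has settled into its intended operating position, which is the timing problem noted above.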
The present invention has been made to address the above problems, and provides an easy-to-operate image display device that recognizes a user's hand and defines an operating region in accordance with the user's intention without imposing a significant processing load.