Apparatuses applying a mixed reality (MR) technique (called mixed reality apparatuses), which combines the real world with a virtual world naturally and without a sense of incongruity, have increasingly been proposed. These mixed reality apparatuses synthesize an image of a virtual world rendered by computer graphics (CG) with an image of the real world sensed by an image sensing apparatus, e.g., a camera, and display the synthesized image on a display device, e.g., a head-mounted display (HMD), thereby presenting mixed reality to a user.
In order to generate an image of a virtual world that tracks changes in the image of the real world, such a mixed reality apparatus must acquire the viewpoint position and/or orientation of its user in real time. Sensors for acquiring the position and/or orientation of the user's viewpoint are widely known. In a mixed reality apparatus, the position and/or orientation of the user's viewpoint measured by a sensor are set as the position and/or orientation of the virtual viewpoint in the virtual world. Based on this setting, an image of the virtual world is rendered by CG and synthesized with the image of the real world. As a result, the user of the mixed reality apparatus can view an image in which a virtual object appears to exist in the real world.
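The per-frame flow described above (measure the viewpoint, render the virtual world from that viewpoint, and composite the result over the sensed real-world image) can be sketched as follows. This is an illustrative toy sketch only; all function names and the 2-D "pose" are assumptions for illustration and do not come from the source.

```python
# Minimal sketch of the MR compositing loop: a virtual object is rendered
# at a location driven by the measured viewpoint pose, then overlaid on
# the sensed real-world frame. All names here are illustrative.

def render_virtual(pose, width, height):
    """Stand-in CG renderer: returns (pixels, mask) for the virtual world.
    Pixels where mask is True belong to the virtual object."""
    px, py = pose  # toy 2-D "viewpoint position" from the sensor
    pixels = [[0] * width for _ in range(height)]
    mask = [[False] * width for _ in range(height)]
    # Draw a one-pixel "virtual object" whose location tracks the viewpoint.
    x, y = px % width, py % height
    pixels[y][x] = 255
    mask[y][x] = True
    return pixels, mask

def composite(real_frame, virtual_pixels, mask):
    """Overlay virtual pixels onto the sensed real-world frame."""
    height, width = len(real_frame), len(real_frame[0])
    return [
        [virtual_pixels[y][x] if mask[y][x] else real_frame[y][x]
         for x in range(width)]
        for y in range(height)
    ]

real = [[128] * 4 for _ in range(3)]    # sensed camera image (flat grey)
virt, m = render_virtual((1, 2), 4, 3)  # virtual view at the measured pose
out = composite(real, virt, m)          # image shown on the HMD
```

Because the virtual viewpoint is slaved to the measured pose, any failure of the sensor to deliver a pose immediately corrupts the composited output, which is the root of the problems discussed below.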
The aforementioned sensors are categorized into optical, magnetic, ultrasonic, mechanical and other types depending on the position/orientation measurement method. With any method, a sensor cannot measure the position and/or orientation without limit; each method has constraints on its measuring range.
For instance, in the case of an optical sensor, a light emitting unit employing a device such as a light emitting diode (LED) emits light under the control of a sensor controller, a photoreceptive unit, e.g., a camera or a line sensor, receives the light from the light emitting unit, and the viewpoint position and/or orientation of the measuring target are determined based on the received light data. Under a condition where the photoreceptive unit cannot recognize the light emitted by the light emitting unit, e.g., where the light emitting unit is too far from the photoreceptive unit, or where a shielding object lies between the light emitting unit and the photoreceptive unit, the sensor is unable to measure the position and/or orientation of the measuring target.
Furthermore, in the case of a magnetic sensor, a sensor control unit controls a transmitter to generate a magnetic field, and a receiver measures the intensity of the magnetic field generated by the transmitter. Based on the direction of the magnetic field generated by the transmitter and the intensity measured by the receiver, the viewpoint position and/or orientation of the measuring target are determined. Under a condition where the receiver cannot accurately measure the magnetic field generated by the transmitter, e.g., where the transmitter is too far from the receiver, the sensor is unable to measure the position and/or orientation of the measuring target. Even when the transmitter is close to the receiver, if a metal or magnetic substance is present near the measurement space, the magnetic field generated by the transmitter is distorted, resulting in a considerable error in the measured position and/or orientation.
A general mixed reality apparatus often requires high sensor precision in order to combine the real world with a virtual world without a sense of incongruity. Therefore, under such conditions of position/orientation measurement error, the mixed reality apparatus cannot practically be used.
For the above-described reasons, when a mixed reality apparatus is employed, it is necessary to determine in advance an area where a user of the mixed reality apparatus can move around, based on the range where the sensor can accurately measure the position and/or orientation of the measuring target, and to limit the movement of the user to within that area (movable area). For instance, Japanese Patent Application Laid-Open (KOKAI) No. 2002-269593 discloses a configuration that terminates CG rendering when the viewpoint deviates from the effective area where the sensor can measure the position and/or orientation.
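The behavior of terminating CG rendering outside the effective measuring area can be sketched as follows. This is not the actual procedure of the cited publication; the axis-aligned box boundary and all names are assumptions chosen for illustration.

```python
# Illustrative sketch: suppress CG rendering when the measured viewpoint
# position falls outside the sensor's effective area. The effective area
# is modeled here as an assumed axis-aligned box (min corner, max corner).

EFFECTIVE_AREA = ((0.0, 0.0, 0.0), (2.0, 2.0, 2.0))

def in_effective_area(position, area=EFFECTIVE_AREA):
    """True if the measured position lies inside the effective area."""
    lo, hi = area
    return all(l <= p <= h for p, l, h in zip(position, lo, hi))

def update_frame(measured_position):
    """Per-frame decision: composite CG inside the area, otherwise
    terminate CG rendering and show the real-world image only."""
    if not in_effective_area(measured_position):
        return "real-only"    # CG rendering terminated outside the area
    return "composited"       # render and synthesize CG as usual
```

Note that with this scheme the switch to "real-only" happens abruptly at the boundary, which is precisely the sudden, unannounced disturbance described in the following paragraphs.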
However, the user of the mixed reality apparatus is not always aware of the movable area while using the apparatus. Most of the time, the user recognizes an abnormality only after the CG rendering of the virtual world becomes incorrect, because the user has unintentionally moved outside the movable area during use and the sensor can no longer measure the viewpoint position and/or orientation. Furthermore, it is often the case that the user, who senses some kind of abnormality at this point, is unable to recognize that the abnormality is caused by deviation from the movable area. The user is therefore unable to identify the cause of the abnormality or how to deal with it, which causes discomfort.
In the case of a magnetic sensor, the position/orientation measurement error grows large near the limit of the movable area, so the user of the mixed reality apparatus can anticipate, to some extent, deviation from the movable area. With an optical sensor, however, position/orientation measurement is performed with high precision even near the limit of the movable area, as long as the user stays within it; the moment the user steps outside the movable area, the position/orientation measurement stops. In other words, to the user of the mixed reality apparatus, the image of the virtual world is suddenly disturbed with no warning, which causes further discomfort.
In order to solve the above-described problem, the following measures have conventionally been taken to guide the user of the mixed reality apparatus so as not to go outside the movable area: putting a mark, e.g., a tape, near the boundary of the movable area in the real world in advance; providing a physical barrier; assigning a dedicated aid to guide the user of the mixed reality apparatus step by step; and so on.
However, in general mixed reality apparatuses, the real world the user can view is limited in image resolution and field of view due to factors such as the performance of the image sensing device and the display device. Furthermore, because an image of a virtual world is superimposed on the image of the real world, part of the real world image is shielded by the virtual world image. Therefore, the user of the mixed reality apparatus cannot observe all parts of the real world. In other words, even if the movable area is marked in the real world, the mark is easily overlooked; moreover, because the virtual world image shields part of the real world image, the user may not be able to see the mark at all. Even when a physical barrier is provided, the user may not be able to see it for the same reason, and in some cases the barrier may put the user at risk. Conventional mixed reality apparatuses are in need of improvement in terms of the above-described points.
Meanwhile, assigning a dedicated aid to guide the user of the mixed reality apparatus does solve the above-described problems. However, a dedicated aid must be allocated each time the mixed reality apparatus is used, which increases the operational effort and burden.