Field of the Invention
The present invention relates to an image display system in which an image display apparatus receives an image from an image processing apparatus via a wireless access point and displays the image, to such an image display apparatus, and to a control method thereof.
Description of the Related Art
Recently, so-called Mixed Reality (MR) techniques have become known as techniques for seamlessly combining the physical world and a virtual world in real time. A known MR technique uses a video see-through Head Mounted Display (HMD). An image capturing apparatus captures an image of an object that almost matches the object viewed from the pupil position of the HMD user, and the HMD user views an MR image obtained by superimposing a Computer Graphics (CG) image on the captured image.
FIG. 10 is a functional block diagram of a general video see-through mixed reality system which wirelessly transmits an image. An overview of the operation will be described with reference to FIG. 10.
An image display apparatus 1001 is, for example, a video see-through HMD. The image display apparatus 1001 has an image capturing unit 1004, three-dimensional position and orientation measurement sensor 1005, wireless communication I/F 1006, and display unit 1007.
The image capturing unit 1004 captures an external observation image that almost matches the line of sight of the HMD user. The image capturing unit 1004 includes two sets of image capturing elements and optical systems, one each for the right and left eyes, which generate a stereoscopic image, and a signal processing circuit, such as a Digital Signal Processing (DSP) circuit, for subsequent image processing.
The three-dimensional position and orientation measurement sensor 1005 measures the three-dimensional position and orientation of a measurement target in order to calculate the rendering position of a CG image. For example, the three-dimensional position and orientation measurement sensor 1005 acquires the three-dimensional position and orientation (to be referred to as position and orientation information hereinafter) of the HMD user (image display apparatus 1001). The three-dimensional position and orientation measurement sensor 1005 is implemented by, for example, a magnetic sensor or a gyro sensor that measures acceleration and angular velocity.
The wireless communication I/F 1006 exchanges data with the wireless access point 1002a or 1002b. For example, the wireless communication I/F 1006 transmits the image captured by the image capturing unit 1004 together with the position and orientation information to the wireless access point 1002a, and receives a composited MR image. Because real-time processing is required, the wireless communication I/F 1006 uses a high-speed wireless standard capable of high-bandwidth transmission, such as UWB or IEEE 802.11n.
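As a rough illustration of why a high-bandwidth wireless standard is needed, the raw bitrate of uncompressed stereo video is the product of resolution, color depth, frame rate, and the number of eyes. The following Python sketch uses illustrative values (SXGA resolution, 24-bit color, 30 frames/s) that are assumptions for the example, not figures from any particular HMD:

```python
# Back-of-envelope bitrate for uncompressed stereo video transmission.
# The resolution, frame rate, and color depth below are illustrative
# assumptions, not specifications of any particular apparatus.

def raw_bitrate_mbps(width, height, fps, bits_per_pixel, eyes=2):
    """Raw (uncompressed) video bitrate in megabits per second."""
    return width * height * bits_per_pixel * fps * eyes / 1e6

# e.g. SXGA (1280 x 1024) stereo video, 24-bit color, 30 frames/s
rate = raw_bitrate_mbps(1280, 1024, 30, 24)
print(f"{rate:.0f} Mbps")  # → 1887 Mbps
```

Even with compression, such figures explain why the related art relies on high-bandwidth links rather than low-rate wireless standards.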
The display unit 1007 displays an MR image with a superimposed CG image. The display unit 1007 includes two sets of display devices and optical systems for the right and left eyes. As the display device, a small liquid crystal display or a retina scan type device utilizing MEMS is used.
Each of the wireless access points (APs) 1002a and 1002b exchanges information with the image display apparatus 1001 by wireless communication. More specifically, each of the wireless access points 1002a and 1002b transmits a display image, that is, a composite of a CG image and a captured image, to the image display apparatus 1001, and receives the output information of the three-dimensional position and orientation measurement sensor 1005 by wireless communication.
Reference numeral 1003 denotes an image processing apparatus. The image processing apparatus 1003 renders a CG image based on the captured image and the position and orientation information received from the image display apparatus 1001, and composites it with the captured image. The image processing apparatus 1003 is generally implemented by an apparatus having advanced arithmetic processing and graphics display functions, such as a personal computer or a workstation.
The image processing apparatus 1003 has a communication I/F 1008, position and orientation information generation unit 1009, CG rendering unit 1010, content storage unit 1011, and image composition unit 1012.
The communication I/F 1008 is the communication I/F on the image processing apparatus 1003 side. Because real-time processing is required, the communication I/F 1008 uses a wired interface such as USB or IEEE 1394 over metal wiring, or Gigabit Ethernet over optical fiber.
The position and orientation information generation unit 1009 generates the position and orientation information of the wearer of the image display apparatus 1001 based on the received captured image and position and orientation information. In addition to the information from the three-dimensional position and orientation measurement sensor 1005 of the image display apparatus 1001, a marker may be extracted from the captured image and used as correction information.
The CG rendering unit 1010 renders a CG image based on the position and orientation information and a content acquired from the content storage unit 1011. The content storage unit 1011 stores contents for generating a virtual image in virtual space.
The image composition unit 1012 composites the received captured image and the virtual image generated by the CG rendering unit 1010. The image composition unit 1012 transmits the obtained MR image (composite image) to the wireless access point 1002a or 1002b via the communication I/F 1008. The wireless access point 1002a or 1002b transmits the MR image to the image display apparatus 1001 by wireless communication. The MR image is thus displayed on the display unit 1007 of the image display apparatus 1001.
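The composition performed by the image composition unit 1012 can be sketched as a per-pixel overlay of the CG image onto the captured image. The following Python sketch is a minimal illustration, assuming hypothetical toy-sized frames and a mask marking where the CG image is opaque; it is not the actual implementation of the related art:

```python
# Minimal sketch of compositing a CG image onto a captured frame.
# Frame sizes and mask are illustrative assumptions.
import numpy as np

def composite(captured: np.ndarray, cg: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Overlay the CG image onto the captured frame wherever the mask is set."""
    return np.where(mask[..., None], cg, captured)

# Hypothetical 4x4 RGB frames: a black captured image and a white CG image
captured = np.zeros((4, 4, 3), dtype=np.uint8)
cg = np.full((4, 4, 3), 255, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True            # CG occupies the central region only

mr_image = composite(captured, cg, mask)  # the MR (composite) image
```

In the system of FIG. 10, the resulting MR image would then be sent through the communication I/F 1008 and a wireless access point to the display unit 1007.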
The above-described arrangement and processing allow any user who wears the video see-through HMD to experience a mixed reality world in which a physical world and a virtual world are seamlessly combined in real time.
Japanese Patent Application Laid-Open No. 11-88913 (FIG. 7, paragraph 0035) and Kato, H., Billinghurst, M., “Marker Tracking and HMD Calibration for a video-based Augmented Reality Conferencing System”, In Proceedings of the 2nd International Workshop on Augmented Reality (IWAR 99), San Francisco, USA, October, (1999) (hereinafter, “Marker Tracking and HMD Calibration for a video-based Augmented Reality Conferencing System”) disclose the arrangements of general MR techniques and systems.
The concept of position and orientation information generation using a marker will be described with reference to FIG. 11.
Referring to FIG. 11, the positional relationship between a marker 1103 and the image capturing apparatus is defined in advance. When the marker 1103 appears in a physical space image 1101, the position and orientation measurement unit detects the marker 1103 from the image data. Based on information such as the size, shape, and fill pattern of the detected marker 1103, it is possible to calculate the relative positional relationship between the marker 1103 and the image capturing apparatus main body, and hence the position and orientation information of the HMD user in the marker observation direction.
FIG. 11 assumes a three-dimensional coordinate system (X axis 1105a, Y axis 1105b, and Z axis 1105c) having its origin at the center of the marker 1103. However, the origin of the coordinate system need not always be set on the marker 1103. The origin of the coordinate system can be set at an arbitrary position by defining the relative positional relationship between that origin and the marker 1103. A plurality of markers may also be used simultaneously for position and orientation information generation. To use a plurality of markers simultaneously, the positional relationship between the markers is defined in advance. This allows calculation of the marker observation direction based on the relative positional relationship between the markers.
Therefore, instead of the marker 1103 shown in FIG. 11, which has an internal fill pattern that identifies its direction, it is also possible to use, for example, a color marker, or a marker formed from a light-emitting element such as an LED, which carries no direction information but only one-dimensional information. Instead of the marker 1103, a feature point in the image, such as an outline 1104 of a table 1102 or a specific color in the image, may be extracted and used to calculate the position and orientation information. More accurate position information can also be generated by simultaneously using a plurality of markers of the same type or a plurality of kinds of markers, or by combining marker information with the information of feature points in the image. The positional relationship between the plurality of markers or feature points is defined in advance. Hence, even when not all of the markers or feature points appear in the image, the position of each marker or feature point can be estimated.
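The point above about relocating the coordinate origin can be sketched as a chain of homogeneous transforms: a marker detector reports the marker's pose in the camera frame, and a predefined marker-to-world transform places the world origin at an arbitrary position. The following Python sketch uses illustrative numeric values; the function and variable names are hypothetical:

```python
# Sketch: deriving the camera (HMD) pose in a world frame whose origin
# is offset from the marker. All numeric values are illustrative.
import numpy as np

def make_transform(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Marker pose in the camera frame, as a marker detector would report it
# (here: 1 m straight ahead of the camera, no rotation).
T_cam_from_marker = make_transform(np.eye(3), np.array([0.0, 0.0, 1.0]))

# Predefined marker pose in the world frame (origin offset from the marker).
T_world_from_marker = make_transform(np.eye(3), np.array([2.0, 0.0, 0.0]))

# Chain the transforms to obtain the camera pose in the world frame.
T_world_from_cam = T_world_from_marker @ np.linalg.inv(T_cam_from_marker)
print(T_world_from_cam[:3, 3])  # camera position in world coordinates
```

The same chaining applies when a plurality of markers are used: once the transform between each marker and the chosen origin is defined in advance, any detected marker yields the same world-frame pose.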
“Marker Tracking and HMD Calibration for a video-based Augmented Reality Conferencing System” discloses implementation of an MR technique using markers.
In the system shown in FIG. 10, when the HMD user moves over a wide area, switching between a plurality of wireless access points is necessary. In one known arrangement for such switching (handover), a plurality of antennas are provided on the mobile terminal itself. Before area switching, the mobile terminal establishes a link with the base station at the switching destination and only then disconnects the preceding link, so that it can always wirelessly communicate with at least one of the base stations. Japanese Patent Application Laid-Open No. 10-16464 (FIG. 1) (hereinafter, “JPLO 10-16464”) discloses an example of this arrangement.
JPLO 10-16464 discloses an arrangement which performs area switching within the range where the communication area of the base station of the current link overlaps that of the base station targeted for new link establishment.
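The make-before-break ordering described above, that is, establishing the new link before releasing the old one, can be sketched as follows. The classes and method names are hypothetical placeholders, not an actual wireless API:

```python
# Sketch of make-before-break handover: the new link is established
# while the old link is still up, then the old link is released.
# WirelessLink and its methods are hypothetical placeholders.

class WirelessLink:
    def __init__(self, ap_name: str):
        self.ap_name = ap_name
        self.connected = False

    def establish(self):
        self.connected = True    # placeholder for association/authentication

    def release(self):
        self.connected = False

def handover(current: WirelessLink, target_ap: str) -> WirelessLink:
    """Switch APs without a gap: connect to the target before disconnecting."""
    new_link = WirelessLink(target_ap)
    new_link.establish()         # both links briefly coexist here
    current.release()            # drop the old link only once the new one is up
    return new_link

link = WirelessLink("AP-1002a")
link.establish()
link = handover(link, "AP-1002b")
print(link.ap_name, link.connected)  # → AP-1002b True
```

Note that during the overlap both links are simultaneously established, which is exactly what necessitates the duplicated antennas, circuits, and band assignment discussed as problems below.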
Japanese Patent Application Laid-Open No. 5-76078 (hereinafter, “JPLO 5-76078”) discloses a method of interpolating an unreceived image when wireless image transmission is interrupted for some reason.
In the arrangement disclosed in JPLO 5-76078, when infrared rays used for wireless transmission are intercepted for some reason, a signal of a still image, which is prepared in advance, is displayed.
However, the above-described prior art has the following problems.
If an HMD having a wireless communication function moves over a wide area, the time required for handover is not negligible for the HMD user. Images are interrupted until handover is complete, that is, during the time from disconnection of a link to establishment of a new link, or during the time from establishment of the new link to reception of a display image.
In the arrangement for always keeping at least one link established by switching between a plurality of antennas, the mobile terminal must have a plurality of antennas and a plurality of communication circuits. This leads to a bulky apparatus arrangement and a large circuit scale. Additionally, since a plurality of link states occur simultaneously, band assignment for communication is necessary. This limits the usable band in the whole space and thus limits the number of HMDs that can use a single wireless access point. Furthermore, if a specific still image is displayed upon interruption, it obstructs the HMD user's view. This poses a safety problem in a system that assumes movement.