1. Field of the Invention
The present invention relates generally to augmented reality, and more particularly, to a system and method for measuring the registration accuracy of an augmented reality system, i.e., for measuring how well computer-generated graphics align to real-world objects or images of real-world objects in the augmented reality system.
2. Description of the Related Art
Augmented reality is the technology in which a user's view of the real world is enhanced with additional information generated from a computer model, i.e., with virtual content. The enhancements may include labels, 3D rendered models, or shading and illumination changes. Augmented reality allows a user to work with and examine the physical world, while receiving additional information about the objects in it. Some target application areas of augmented reality include computer-aided surgery, repair and maintenance, facilities modification, and interior design.
In a typical augmented reality system, the view of a real scene is augmented by superimposing computer-generated graphics on this view such that the generated graphics are properly aligned with real-world objects as needed by the application. The graphics are generated from geometric models of both virtual objects and real objects in the environment. In order for the graphics and video of the real world to align properly, the pose and optical properties of the real and virtual cameras of the augmented reality system must be the same. The position and orientation (pose) of the real and virtual objects in some world coordinate system must also be known. The locations of the geometric models and virtual cameras within the augmented environment may be modified by moving their real counterparts. This is accomplished by tracking the location of the real objects and using this information to update the corresponding transformations within the virtual world. This tracking capability may also be used to manipulate purely virtual objects, ones with no real counterpart, and to locate real objects in the environment. Once these capabilities have been brought together, real objects and computer-generated graphics may be blended together, thus augmenting a dynamic real scene with information stored and processed on a computer.
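The tracking-driven update described above can be sketched as follows. This is an illustrative sketch only, not the method of any particular system; the 4x4 homogeneous transforms, the function names, and the fixed tracker-to-object offset are all assumptions introduced for illustration:

```python
import numpy as np

def pose_to_matrix(position, rotation):
    """Build a 4x4 homogeneous transform from a tracked pose
    (3-vector position, 3x3 rotation matrix)."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = position
    return T

def update_virtual_object(world_from_tracker, tracker_from_object):
    """Chain the tracked pose with a fixed offset to place the
    virtual counterpart of a real object in world coordinates."""
    return world_from_tracker @ tracker_from_object

# Example: the tracker reports the real object 1 m along x, unrotated;
# the virtual model's transform is updated to match.
world_from_tracker = pose_to_matrix(np.array([1.0, 0.0, 0.0]), np.eye(3))
object_pose = update_virtual_object(world_from_tracker, np.eye(4))
```

Each time the tracker reports a new pose, the corresponding transformation in the virtual world is recomputed in this fashion, keeping the graphics registered with the moving real object.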
In order for augmented reality to be effective, the real and virtual objects must be accurately positioned relative to each other, i.e., registered, and properties of certain devices must be accurately specified. This implies that certain measurements or calibrations need to be made. These calibrations involve measuring the pose, i.e., the position and orientation, of various components such as trackers, cameras, etc. What needs to be calibrated in an augmented reality system and how easy or difficult it is to accomplish this depends on the architecture of the particular system and what types of components are used.
A well-calibrated optical see-through system should be able to accurately register the virtual and the real in the user's view. However, errors in calibration and in tracking or pose estimation may cause incorrect superimposition of the virtual and the real. For example, the virtual objects may appear to “lag behind” their real counterparts as the user moves around, or the virtual objects may appear to “swim around” the real objects instead of staying registered with them. An objective assessment of these errors needs to be made and some sort of error metric has to be established.
Objectively assessing the registration error has not been completely addressed within the augmented reality community. There is no established objective method for measuring how well the virtual and the real are aligned in the user's view. There have been two main approaches to the problem: 1) relying on the user to report the qualitative accuracy of the alignment (e.g., as “acceptable”, “good”, or “not acceptable”); or 2) replacing the human eye with a camera and conducting image-based measurements to assess the accuracy of the alignment. The former method is quick and easy, but it does not give a quantitative measure and, more importantly, it is subjective. The latter method does not guarantee that the measured accuracy will be observed by the user, since the camera is only an approximation of the human eye. Furthermore, the latter method is a very tedious process that cannot be repeated as the need arises and must be performed offline, since the camera is mounted where the user's head would normally be.
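The camera-based measurement described above typically reduces to comparing, in image space, where the virtual features are rendered against where the corresponding real features are detected. A minimal sketch of such an image-based error metric follows; the pinhole projection model, the intrinsic parameters, and the function names are assumptions made for illustration, and real systems would additionally model lens distortion:

```python
import numpy as np

def project(points_3d, K, R, t):
    """Pinhole projection of 3D world points into the camera image
    (K: 3x3 intrinsic matrix; R, t: camera rotation and translation)."""
    cam = (R @ points_3d.T).T + t          # world -> camera coordinates
    uv = (K @ cam.T).T                     # camera -> homogeneous image
    return uv[:, :2] / uv[:, 2:3]          # perspective divide -> pixels

def registration_error(virtual_px, detected_px):
    """RMS pixel distance between projected virtual feature points and
    the corresponding real features detected in the camera image."""
    d = np.linalg.norm(virtual_px - detected_px, axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

# Illustrative intrinsics: 500-pixel focal length, principal point (320, 240).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
pts = np.array([[0.0, 0.0, 1.0],
                [0.1, 0.0, 1.0]])
virtual_px = project(pts, K, np.eye(3), np.zeros(3))

# Perfect registration: detected features coincide with the projections.
err_perfect = registration_error(virtual_px, virtual_px)
# A uniform 3-pixel horizontal misregistration.
err_shifted = registration_error(virtual_px + np.array([3.0, 0.0]), virtual_px)
```

A metric of this form is quantitative and repeatable for a fixed camera, which is precisely why the camera-based approach is attractive despite the drawbacks noted above.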