1. Field of the Invention
The present invention relates generally to tracking systems used in conjunction with augmented reality applications.
2. Description of Related Art
Augmented reality (AR) systems are used to display virtual objects in combination with a real environment. AR systems have a wide range of applications, including special effects for movies, display of medical data, and training using simulation environments. In order to effectively achieve the illusion of inserting a virtual object into a real environment, a user's viewpoint within the real environment (hereinafter "camera pose") must be accurately tracked as the user moves about within the real environment.
Generally, a camera pose within a real environment can be initialized by utilizing pre-calibration of real objects within the environment. By pre-calibrating the position of certain objects or features within the real environment and analyzing the image generated by the initial perspective of the camera pose, the parameters of the initial camera pose can be calculated. The camera pose is thereby initialized. Subsequently, a camera's moving viewpoint must be tracked as it changes within the real environment, so that virtual objects can be combined with the real environment appropriately and realistically, according to the camera's viewpoint in any given frame. This type of tracking is termed “object-centric tracking,” in that it utilizes objects within the real environment to track the changing camera pose. Effectiveness of AR systems depends at least in part upon alignment and annotation of real objects within the environment.
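The initialization described above, in which the parameters of the initial camera pose are calculated from pre-calibrated features, can be sketched with the classical Direct Linear Transform (DLT). The following is an illustrative sketch only, not an implementation from any of the cited systems; the function names and the simple pinhole-camera setup are assumptions made for this example.

```python
import numpy as np

def estimate_projection_matrix(world_pts, image_pts):
    """Direct Linear Transform (DLT): recover the 3x4 camera projection
    matrix P (which encodes the camera pose) from six or more
    correspondences between known, pre-calibrated 3-D landmark positions
    and their observed 2-D image locations."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        # Each correspondence contributes two linear constraints on P.
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The solution (up to scale) is the right singular vector associated
    # with the smallest singular value of the constraint matrix.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

def project(P, X):
    """Project a 3-D point through projection matrix P (pinhole model)."""
    x = P @ np.append(np.asarray(X, dtype=float), 1.0)
    return x[:2] / x[2]
```

Given at least six non-coplanar pre-calibrated landmarks visible in the initial frame, `estimate_projection_matrix` yields the initial camera pose up to scale, which is the sense in which the camera pose "is thereby initialized."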
Various types of object-centric tracking systems for use with augmented reality systems have been utilized in the past. For example, self-tracking systems using point features that exist on objects within a real environment have been used to track camera pose within the environment (R. Azuma, "Survey of Augmented Reality." Presence: Teleoperators and Virtual Environments 6 (4), 355–385 (August 1997); U. Neumann and Y. Cho, "A Self-Tracking Augmented Reality System." Proceedings of ACM Virtual Reality Software and Technology, 109–115 (July 1996); J. Park, B. Jiang, and U. Neumann, "Vision-based Pose Computation: Robust and Accurate Augmented Reality Tracking." Proceedings of International Workshop on Augmented Reality (IWAR) '99 (October 1999); A. State, G. Hirota, D. Chen, B. Garrett, and M. Livingston, "Superior Augmented Reality Registration by Integrating Landmark Tracking and Magnetic Tracking." Proceedings of SIGGRAPH '96; G. Welch and G. Bishop, "SCAAT: Incremental Tracking with Incomplete Information." Proceedings of SIGGRAPH '96, 429–438 (August 1996)). These and other similar systems require prepared environments in which the system operator can place and calibrate artificial landmarks. The known features of the pre-calibrated landmarks are then used to track the changing camera poses. Unfortunately, such pre-calibrated point feature tracking methods are limited to use within environments in which the pre-calibrated landmarks are visible. Should the camera pose stray from a portion of the environment in which the pre-calibrated point features are visible, the tracking method degrades in accuracy, eventually ceasing to function. Therefore, such systems have limited range and usefulness.
Other tracking methods have been utilized for the purpose of reducing the dependence on visible landmarks, thus expanding the tracking range within the environment, by auto-calibrating unknown point features in the environment (U. Neumann and J. Park, "Extendible Object-Centric Tracking for Augmented Reality." Proceedings of IEEE Virtual Reality Annual International Symposium 1998, 148–155 (March 1998); B. Jiang, S. You and U. Neumann, "Camera Tracking for Augmented Reality Media." Proceedings of IEEE International Conference on Multimedia and Expo 2000, 1637–1640, 30 Jul.–2 Aug. 2000, New York, N.Y.). These and similar tracking methods use "auto-calibration," in which the tracking system dynamically calibrates previously un-calibrated features by sensing and integrating the new features into its tracking database as it tracks the changing camera pose. Because the tracking database is initialized with only the pre-calibration data, the database growth effect of such point feature auto-calibration tracking methods serves to extend the tracking region semi-automatically. Such point feature auto-calibration techniques effectively extended the tracking range from a small prepared area occupied by pre-calibrated landmarks to a larger, unprepared area where none of the pre-calibrated landmarks are in the user's view. Unfortunately, however, such tracking methods rely only on point features within the environment. These methods are therefore ineffective for environments that lack distinguishing point features, or for environments in which the location coordinates of visible point features are unknown. Moreover, these methods for recovering camera poses and structures of objects within the scene produce relative rather than absolute camera poses, and relative camera poses are not suitable for some augmented reality applications.
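The database-growth effect of point feature auto-calibration can be sketched as follows: once the camera pose is known in two frames, an un-calibrated feature seen in both frames can be triangulated and added to the tracking database. This is an illustrative sketch under a simple pinhole-camera assumption, not the method of the cited works; the class and function names are hypothetical.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear two-view triangulation: recover the 3-D position of an
    un-calibrated feature from its projections x1, x2 in two frames
    whose camera poses (projection matrices P1, P2) are already known."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

class TrackingDatabase:
    """Hypothetical tracking database that starts with only the
    pre-calibrated landmarks and grows as new features are
    auto-calibrated while the camera pose is tracked."""
    def __init__(self, precalibrated):
        self.features = dict(precalibrated)  # name -> 3-D position
    def auto_calibrate(self, name, P1, P2, x1, x2):
        self.features[name] = triangulate(P1, P2, x1, x2)
```

Each newly triangulated feature becomes usable for pose tracking in later frames, which is how the tracking region extends beyond the originally prepared area.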
Still other tracking methods utilize pre-calibrated line features within an environment (R. Kumar and A. Hanson, "Robust Methods for Estimating Pose and a Sensitivity Analysis." CVGIP: Image Understanding, Vol. 60, No. 3, 313–342 (November 1994)). Line features provide more information than point features and can therefore be tracked more reliably. Line features are also useful for tracking purposes in environments having no point features or unknown point features. However, the mathematical definition of a line is much more complex than that of a simple point feature. Because of the mathematical complexities associated with defining lines, line features have not been suitable for auto-calibration techniques in the way that mathematically less complex point features have been. Therefore, tracking methods utilizing line features have been dependent upon the visibility of pre-calibrated landmarks, and inherently have a limited environment range. Line feature tracking methods have therefore not been suitable for larger environments in which line features are unknown and un-calibrated, and in which pre-calibrated line features are not visible.
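The mathematical complexity referred to above can be illustrated with one common line parameterization, Plücker coordinates: a 3-D line is described by six numbers defined only up to scale and subject to a bilinear constraint, even though the line itself has only four degrees of freedom (versus three for a point). The sketch below is a generic illustration of that representation, not taken from the cited work.

```python
import numpy as np

def plucker_from_points(p, q):
    """Plucker coordinates (d, m) of the 3-D line through points p and q:
    d is the line direction and m = p x d is the moment.  A valid Plucker
    line must satisfy the bilinear constraint d . m = 0, and (d, m) is
    defined only up to a common scale factor -- two properties that a
    simple 3-D point parameterization does not have to maintain."""
    p = np.asarray(p, dtype=float)
    d = np.asarray(q, dtype=float) - p
    m = np.cross(p, d)
    return d, m

def point_on_line(d, m, x):
    """A point x lies on the line (d, m) iff x cross d equals m."""
    return np.allclose(np.cross(x, d), m)
```

An estimator updating such a representation must keep the constraint d · m = 0 satisfied at every step, which is one concrete reason line features have been harder to auto-calibrate than point features.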