With recent advances in technology, Augmented Reality (AR) applications are increasingly common on everyday user devices such as smartphones. In AR applications, which may be real-time and interactive, real images may be processed to add virtual object(s) to the image and to align the virtual object(s) to the captured image in three dimensions (3D). Typically, the virtual objects supplement real-world images. Therefore, detecting and localizing objects present in a real image, and determining the pose of the camera relative to those objects through an image sequence, facilitates accurate virtual object placement and preserves the blending of the real and virtual worlds.
When hand-held user devices are used for image capture, robust tracking methods are desirable to tolerate rapid, unconstrained hand movements, which can result in tracking failure and/or poor pose estimation. While point-based features are easily localized and facilitate the determination of feature correspondences between images, they are susceptible to tracking errors, which can lead to pose drift. On the other hand, line- or edge-based features are less susceptible to pose drift because they are stable under lighting and aspect changes. However, they are susceptible to errors during feature correspondence determination, which makes robust edge tracking challenging.
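The pose-drift problem mentioned above can be illustrated with a toy 1-D simulation (purely illustrative, not any method described in this document): when each frame's pose is estimated relative to the previous frame, small per-frame tracking errors are carried forward and accumulate, whereas measuring against a fixed reference keeps the error bounded. The frame count, step size, and noise level below are arbitrary assumptions chosen for illustration.

```python
import numpy as np

# Toy 1-D simulation of pose drift (illustrative assumptions only).
n_frames = 200   # length of the image sequence
step = 1.0       # true camera motion per frame, arbitrary units
sigma = 0.05     # per-frame measurement noise (std dev)
trials = 500     # Monte Carlo trials to estimate RMS error

true_final = step * n_frames
rel_errors, ref_errors = [], []
for seed in range(trials):
    rng = np.random.default_rng(seed)
    # Frame-to-frame tracking: integrate noisy relative displacements,
    # so every frame's error is carried into all later frames.
    rel_final = np.sum(step + rng.normal(0.0, sigma, n_frames))
    rel_errors.append(rel_final - true_final)
    # Reference-based tracking: pose measured against a fixed keyframe,
    # so only the current frame's error contributes.
    ref_final = true_final + rng.normal(0.0, sigma)
    ref_errors.append(ref_final - true_final)

rms_rel = float(np.sqrt(np.mean(np.square(rel_errors))))
rms_ref = float(np.sqrt(np.mean(np.square(ref_errors))))
print(f"RMS drift, frame-to-frame:  {rms_rel:.3f}")  # grows like sigma*sqrt(n_frames)
print(f"RMS drift, reference-based: {rms_ref:.3f}")  # stays near sigma
```

The simulation reflects the familiar random-walk result: chained relative estimates drift with an error that grows roughly as the square root of the number of frames, which is why purely frame-to-frame point tracking degrades over long sequences.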
Therefore, there is a need for robust tracking methods that enhance current feature-based tracking approaches to improve robustness and tracking accuracy, thereby providing an improved AR experience.