In augmented reality (AR) applications, a real-world object is imaged and displayed on a screen along with computer-generated information, such as an image or textual information. The imaged real-world objects are detected and tracked in order to determine the camera's position and orientation (pose) relative to the object. The pose is then used to correctly render the graphical object displayed along with the real-world object.

The real-world objects that are detected and tracked are generally two-dimensional (2D) objects. Detecting and tracking three-dimensional (3D) objects is algorithmically more complex and computationally far more expensive than detecting and tracking 2D surfaces. On a desktop computer, full 3D tracking is typically performed by recognizing the entire 3D object from its geometry. Due to the limited processing power of mobile platforms, such as smart phones, there is no full 3D object tracking solution for mobile platforms. However, 3D object detection and tracking remains an important objective in AR applications, as it enables compelling user experiences built around 3D objects in the real world rather than only 2D images or planes. Thus, what is needed is an improved way to detect and track 3D objects.
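The use of pose described above can be illustrated with a minimal sketch. Assuming a simple pinhole camera model with hypothetical intrinsics (FX, FY, CX, CY) and a pose already estimated by the tracker as a rotation R and translation t relative to the detected object, a virtual 3D point anchored to the object can be projected into the image so the rendered graphic stays registered with the real-world object. All numeric values here are illustrative, not from the source.

```python
# Hypothetical pinhole intrinsics: focal lengths and principal point, in pixels.
FX, FY = 800.0, 800.0
CX, CY = 320.0, 240.0

# Assumed pose from the tracker: camera faces the object head-on,
# with the object two units in front of the camera.
R = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 2.0]

def project(point, R, t):
    """Project an object-space 3D point into pixel coordinates."""
    # Object frame -> camera frame: X_cam = R * X + t
    cam = [sum(R[i][j] * point[j] for j in range(3)) + t[i] for i in range(3)]
    # Pinhole projection with intrinsics (FX, FY, CX, CY)
    return (FX * cam[0] / cam[2] + CX, FY * cam[1] / cam[2] + CY)

# A virtual label anchored at the object's origin lands at the image center.
print(project([0.0, 0.0, 0.0], R, t))  # -> (320.0, 240.0)
```

Re-running the projection every frame with the freshly estimated pose is what keeps the computer-generated overlay aligned with the moving real-world object.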