Computer vision in general, and real-time object tracking in particular, have numerous applications such as surveillance systems, augmented reality (AR), human-computer interaction (HCI), medical imaging, and so forth. As will be appreciated, there are a number of techniques for tracking objects in real-time. Such techniques may be broadly categorized into point-based tracking, kernel-based tracking, and contour-based tracking.
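To make the first category concrete, the sketch below illustrates a minimal point-based tracker: detected feature points in one frame are associated with points in the next frame by greedy nearest-neighbour matching. This is an illustrative simplification only; the function name, the greedy matching strategy, and the distance threshold are assumptions for the example and do not correspond to any particular library or to the techniques described herein.

```python
from math import hypot

def match_points(prev_points, curr_points, max_dist=50.0):
    """Greedy nearest-neighbour association of tracked points
    between consecutive frames (minimal point-based tracking).

    prev_points, curr_points: lists of (x, y) tuples.
    Returns a list of (prev_index, curr_index) pairs; points with
    no neighbour within max_dist are treated as lost tracks.
    """
    matches = []
    used = set()  # current-frame points already claimed by a track
    for i, (px, py) in enumerate(prev_points):
        best_j, best_d = None, max_dist
        for j, (cx, cy) in enumerate(curr_points):
            if j in used:
                continue
            d = hypot(cx - px, cy - py)
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            used.add(best_j)
            matches.append((i, best_j))
    return matches

# Two consecutive frames: each tracked point moves slightly.
prev_frame = [(10.0, 10.0), (100.0, 100.0)]
curr_frame = [(12.0, 11.0), (98.0, 103.0)]
print(match_points(prev_frame, curr_frame))  # → [(0, 0), (1, 1)]
```

Even this toy version hints at the cost structure discussed below: the association step alone is quadratic in the number of points per frame, which is one reason per-frame processing budgets matter on constrained hardware.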
However, tracking objects in real-time using existing techniques on low-end electronic devices (e.g., embedded devices, cameras, mobile phones with low computational capability, etc.) may be quite challenging due to hardware constraints (e.g., low computational capability) of such devices. In real-world applications, once the objects are tracked in a frame sequence, per-frame processing such as augmenting content on the tracked objects, estimating the pose of an object, and so forth may further degrade the real-time performance of such devices. This degraded performance may, in turn, result in missed tracks in the frame sequence, thereby impacting the performance of the tracking technique itself. In other words, existing techniques are inefficient, slow, and not robust, particularly on low-end electronic devices. Moreover, existing techniques are limited by a tradeoff between tracking speed and tracking robustness.