Those versed in the art would appreciate that many devices exist, including devices configured to be controlled by a mouse, devices configured to be controlled by a touch sensitive surface, such as a touchscreen, and devices configured to be controlled by other input means. A touch sensitive surface enables a user to control the device by touching the surface with a touching element such as a pen, a stylus, and/or a finger. It is appreciated that the input obtained from a touch sensitive surface differs from the input obtained from a mouse or a similar pointing device in two major aspects. In one aspect, the input obtained from the mouse is continuous, and therefore the “mouse cursor” moves in a continuous mouse route on the screen. The input obtained from the touch sensitive surface, on the other hand, is not continuous, because the input is obtained only when the touching element is in contact with the surface. Therefore, the input obtained from the touch sensitive surface comprises disjoint segments corresponding to “touching periods”, separated by “no touching periods”. In another aspect, the input obtained from the mouse is relative, and indicates a movement of the “mouse cursor” relative to its current location.
The input obtained from the touch sensitive surface, on the other hand, indicates an absolute location on the touch sensitive surface. Understanding this, it can be appreciated that upon touching the touch sensitive surface in order to perform an operation attributed to a certain location on the surface, the absolute location of the touch determines the certain location to which the operation should be attributed.
However, in a mouse controllable device, no such absolute location is provided upon pressing a mouse button for performing an operation. Therefore, in order to determine the absolute location, the location of the mouse cursor should be utilized. Without the mouse cursor the absolute location cannot be determined, and without the mouse route, the location of the mouse cursor cannot be determined.
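The distinction above can be sketched as follows. This is a minimal illustrative sketch with hypothetical event names and fields, not an actual device interface: a touch event directly carries an absolute location, whereas the absolute cursor location for mouse input must be accumulated along the mouse route from a known starting point.

```python
def locate_touch(event):
    # A touch event (hypothetical dict fields) directly reports the
    # absolute coordinates of the contact point on the surface.
    return (event["x"], event["y"])


def locate_mouse(deltas, start=(0, 0)):
    # Mouse input is relative: each sample is a movement (dx, dy) of the
    # cursor relative to its current location. The absolute cursor location
    # is recovered only by accumulating the deltas along the mouse route,
    # beginning from a known starting location.
    x, y = start
    route = [(x, y)]
    for dx, dy in deltas:
        x, y = x + dx, y + dy
        route.append((x, y))
    return (x, y), route


# A touch yields its absolute location immediately; the mouse location
# depends on the entire route traversed so far.
touch_location = locate_touch({"x": 5, "y": 7})
mouse_location, mouse_route = locate_mouse([(1, 0), (0, 2), (3, 3)])
```

Note that if the starting location or any portion of the route is unknown, the mouse location cannot be recovered, which is precisely the dependency described above.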
Presently in the art there are publications that present methods for recognizing images of objects in visual data, e.g., in a first paper “Struck: Structured output tracking with kernels” by Hare, Sam, Amir Saffari, and Philip H. S. Torr, in IEEE International Conference on Computer Vision (ICCV), 2011, pp. 263-270 (“Hare et al”), and in a second paper “Robust object tracking via sparsity-based collaborative model” by Zhong, Wei, Huchuan Lu, and Ming-Hsuan Yang, in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012, pp. 1838-1845 (“Zhong et al”).
Hare et al present a framework for adaptive visual object tracking based on structured output prediction. By explicitly allowing the output space to express the needs of the tracker, they are able to avoid the need for an intermediate classification step. Their method uses a kernelized structured output support vector machine (SVM), which is learned online to provide adaptive tracking. To allow for real-time application, they introduce a budgeting mechanism which prevents the unbounded growth in the number of support vectors which would otherwise occur during tracking. They show that the algorithm is able to outperform trackers on various benchmark videos. Additionally, they show that they can incorporate additional features and kernels into their framework, which results in increased performance.
Zhong et al propose a robust object tracking algorithm using a collaborative model. As a main challenge for object tracking is to account for drastic appearance change, they propose a robust appearance model that exploits both holistic templates and local representations. They describe a sparsity-based discriminative classifier (SDC) and a sparsity-based generative model (SGM). In the SDC module, they introduce an effective method to compute the confidence value that assigns more weights to the foreground than the background. In the SGM module, they describe a histogram-based method that takes the spatial information of each patch into consideration with an occlusion handling scheme. Furthermore, the update scheme considers both the observations and the template, thereby enabling the tracker to deal with appearance change effectively and alleviate the drift problem.