Object tracking, a feature commonly used in digital video workflows and applications, allows motion of an object to be tracked between frames of a digital video. Object tracking may support a variety of functionality, including censoring of faces, redaction of personal information, the application of overlays and clip art, and so on. In conventional techniques, a boundary of the object is first defined manually and is then used as a basis to track motion of the object between the frames of the digital video. A user, for instance, may manually create an object mask by selecting points on various edges, contours, and surfaces of the object in a frame of the digital video. Such selected points are referred to as "feature points." Depending on the complexity of the object, definition of the feature points using conventional manual techniques may involve substantial time and effort and is prone to error. This may be especially true when the object intended to be tracked has a non-trivial shape, e.g., a high number of complex and intricate edges.
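The conventional approach described above can be sketched as follows. This is a minimal, illustrative block-matching tracker written for this discussion, not the method of any particular conventional system: a patch is sampled around a selected feature point in one frame, and the best-matching location for that patch is searched for exhaustively in the next frame.

```python
import numpy as np

def track_point(prev_frame, next_frame, point, patch=5, search=8):
    """Track one feature point between two grayscale frames by matching
    the patch around it against candidate locations in a search window."""
    y, x = point
    # Reference patch sampled around the feature point in the first frame.
    ref = prev_frame[y - patch:y + patch + 1,
                     x - patch:x + patch + 1].astype(float)
    best, best_err = point, np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cy, cx = y + dy, x + dx
            if cy - patch < 0 or cx - patch < 0:
                continue  # candidate window would run off the frame edge
            cand = next_frame[cy - patch:cy + patch + 1,
                              cx - patch:cx + patch + 1].astype(float)
            if cand.shape != ref.shape:
                continue
            # Sum of squared differences: lower means a closer match.
            err = np.sum((ref - cand) ** 2)
            if err < best_err:
                best_err, best = err, (cy, cx)
    return best

# Synthetic example: a bright square shifts 3 pixels right between frames.
frame0 = np.zeros((64, 64), dtype=np.uint8)
frame0[20:30, 20:30] = 255
frame1 = np.zeros((64, 64), dtype=np.uint8)
frame1[20:30, 23:33] = 255

# Track a feature point placed on the square's top-left corner.
print(track_point(frame0, frame1, (20, 20)))  # → (20, 23)
```

In a full workflow, each manually selected feature point on the object's edges and contours would be tracked this way frame to frame, and the resulting points would drive the mask used for censoring, redaction, or overlays.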
Conventional techniques that rely on the selection and tracking of feature points, as described above, face numerous challenges that may result in inaccurate and untimely object tracking. Feature points, for instance, may deviate from and become disassociated from the object as it moves between frames, such that the feature points no longer coincide with the contours and edges of the object. Feature point deviation may occur for a variety of reasons, such as when the tracked object moves too rapidly for the tracking algorithm to keep pace, when the background changes along with foreground content, or when the camera shakes excessively.
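The fast-motion failure mode can be demonstrated concretely. The snippet below uses a minimal block-matching tracker (an assumed stand-in for a conventional algorithm, not the method of any specific system): when the object's per-frame displacement exceeds the tracker's search radius, the tracker physically cannot reach the object's new location, so the "feature point" settles on background and deviates from the object.

```python
import numpy as np

def match_in_window(prev_frame, next_frame, point, patch=5, search=4):
    """Exhaustive patch matching, limited to +/-`search` pixels."""
    y, x = point
    ref = prev_frame[y - patch:y + patch + 1,
                     x - patch:x + patch + 1].astype(float)
    best, best_err = point, np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cy, cx = y + dy, x + dx
            if cy - patch < 0 or cx - patch < 0:
                continue  # window would run off the frame edge
            cand = next_frame[cy - patch:cy + patch + 1,
                              cx - patch:cx + patch + 1].astype(float)
            if cand.shape != ref.shape:
                continue
            err = np.sum((ref - cand) ** 2)
            if err < best_err:
                best_err, best = err, (cy, cx)
    return best

# The tracked square jumps 10 pixels between frames, farther than the
# tracker's 4-pixel search radius.
prev = np.zeros((64, 64), dtype=np.uint8)
prev[20:30, 20:30] = 255
curr = np.zeros((64, 64), dtype=np.uint8)
curr[20:30, 30:40] = 255

# The square's center moves from (25, 25) to roughly (25, 35), but the
# search window only spans columns 21..29, so the returned point cannot
# land on the object: it has drifted onto background.
print(match_in_window(prev, curr, (25, 25)))
```

Widening the search radius (e.g., `search=12`) lets the tracker recover the true displacement in this toy case, but at a quadratically growing search cost, which illustrates why conventional trackers bound the window and thus lose rapidly moving objects.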
Feature point deviation may also be caused by similarity in coloration between the tracked object and the background, by error buildup in the applied tracking algorithms, or because the tracking algorithms are not capable of learning and thus cannot self-adjust. For example, conventional tracking algorithms may fail to account for parameters specific to the object and instead rely simply on object color and shape, which may cause these techniques to fail due to difficulty in differentiating the object from the background. Accordingly, conventional object tracking techniques require significant amounts of user interaction, the results of which may fail over time, and as such are inefficient both with respect to user interaction and with respect to resources of the computing device that employs these conventional techniques.