With the application of software-based and/or hardware-based stabilization technology, jitter caused by camera movement may be minimized, making it possible to transform shaky, handheld footage into steady, smooth shots. One way to stabilize a video is to track one or more salient features in the image and use them as anchor points to cancel out all perturbations relative to them. This approach requires a priori knowledge of the image's content, e.g., to identify and track a person or other salient object in the scene. Another approach to image stabilization searches for a “background plane” in a video sequence and uses its observed distortion to correct for camera motion. In yet another approach, known as “optical image stabilization,” gyroscopically controlled electromagnets shift a floating lens element orthogonally to the optical axis, i.e., within the plane of the image, in a direction opposite that of the camera movement, which can effectively counteract camera shake. In a similar type of operation, a camera's imaging sensor may be translated in the direction opposite the camera's movements in order to dampen the effects of camera shake.
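The anchor-based approach described above can be sketched as follows. This is a minimal illustration, not any particular product's implementation: the function name `stabilize_by_anchor` is hypothetical, the feature tracking itself (the source of the anchor positions) is assumed to happen elsewhere, and a simple wrap-around translation stands in for a real warp:

```python
import numpy as np

def stabilize_by_anchor(frames, anchor_positions):
    """Translate each frame so a tracked anchor feature stays fixed.

    frames: list of 2-D numpy arrays (grayscale images).
    anchor_positions: per-frame (row, col) position of the tracked feature;
    how the feature is detected and tracked is out of scope here.
    """
    reference = np.asarray(anchor_positions[0])
    stabilized = []
    for frame, pos in zip(frames, anchor_positions):
        # Shift that moves the feature back to its reference position,
        # cancelling the camera-induced perturbation relative to the anchor.
        dy, dx = reference - np.asarray(pos)
        stabilized.append(np.roll(frame, shift=(int(dy), int(dx)), axis=(0, 1)))
    return stabilized
```

A production system would warp with sub-pixel interpolation and crop the border regions exposed by the shift, rather than wrapping pixels around as `np.roll` does.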
One limitation of current video stabilization techniques is that they are not capable of applying particular stabilization constraints to a particular selected image(s) in an incoming video sequence. For example, a user may desire that a particular image in an incoming video sequence be completely stabilized (or stabilized within a maximum allowed displacement, such as +/−4 pixels in any direction). Current techniques do not provide a way to ‘steer’ the stabilization trajectory of the incoming video sequence (as it is being stabilized) towards the position of the selected image(s). Ideally, when the video sequence is stabilized, the particular stabilization constraints would be met for the selected image(s) without presenting any visibly jarring jumps as the selected image(s) is reached during playback of the stabilized version of the video sequence.
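As an illustrative sketch only, the notion of ‘steering’ can be pictured as blending just enough correction into a smoothed trajectory that the stabilized position at a selected frame lands within the maximum allowed displacement of that frame's original position. The function name, the moving-average smoother (a stand-in for a real trajectory filter), and the linear blending ramp are all assumptions:

```python
import numpy as np

def steer_trajectory(raw, selected_idx, max_disp=4.0, window=5):
    """Smooth a 1-D camera trajectory (e.g., horizontal position per frame)
    while constraining the stabilized position at a selected frame to stay
    within max_disp pixels of its raw position."""
    raw = np.asarray(raw, dtype=float)
    # Plain moving-average smoothing as a stand-in for the real smoother.
    kernel = np.ones(window) / window
    smooth = np.convolve(raw, kernel, mode='same')
    # How far the smoothed trajectory strays from the selected frame,
    # and how much of that stray the constraint disallows.
    err = smooth[selected_idx] - raw[selected_idx]
    excess = np.clip(err, -max_disp, max_disp) - err
    if excess != 0.0:
        # Blend the correction in gradually over neighboring frames so
        # playback shows no visible jump as the selected frame is reached.
        n = len(raw)
        ramp = 1.0 - np.minimum(np.abs(np.arange(n) - selected_idx) / window, 1.0)
        smooth += excess * ramp
    return smooth
```

After the blend, the stabilized position at `selected_idx` differs from the raw position by at most `max_disp`, while frames farther than `window` positions away are untouched.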
The difficulties associated with stabilizing video frames while applying particular stabilization constraints to a particular selected image(s) in a video sequence are further exacerbated in implementations wherein it is desired that only a single version of the stabilized video sequence be stored in memory, and wherein multiple images in the incoming video stream are selected to meet one or more particular stabilization constraints. When such “multiple selected image” video sequences are stabilized, the resulting videos may look unnatural and jarring if the stabilization process does not steer the video stabilization trajectory towards the selected images in an intelligent fashion. Moreover, without keeping separate versions of the video sequence in memory (e.g., one version for each “selected” image), the steered video stabilization trajectory must be shared in such a way as to provide the best possible overall outcome, rather than steering the trajectory solely to meet the stabilization constraint for a first one of the selected images.
Thus, what is needed are techniques to modulate a video stabilization strength parameter and/or the weighting values applied to individual frames in the determination of frame stabilization motion values for a sequence of video frames based, at least in part, on one or more video frames in the sequence that have been selected to meet one or more particular stabilization constraints. Such techniques are also preferably computationally efficient.
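One hypothetical way such modulation could work is a per-frame strength schedule that tapers the stabilization strength toward zero as the trajectory approaches each selected frame, so that the stabilized position is free to converge on the selected frame's own position, and that handles multiple selected frames with a single shared schedule. The function name, the linear ramp, and the parameter names below are illustrative assumptions only:

```python
import numpy as np

def frame_weights(num_frames, selected, base_strength=1.0, ramp=10):
    """Per-frame stabilization strength schedule.

    Strength falls linearly to zero over `ramp` frames on either side of
    each selected frame index, and a single schedule is shared across all
    selected frames (no separate trajectory per selection is stored).
    """
    idx = np.arange(num_frames)
    strength = np.full(num_frames, float(base_strength))
    for s in selected:
        # Linear taper: 0 at the selected frame, back to full strength
        # once the distance exceeds `ramp` frames.
        taper = np.minimum(np.abs(idx - s) / ramp, 1.0)
        strength = np.minimum(strength, base_strength * taper)
    return strength
```

Taking the element-wise minimum across selections means overlapping tapers combine conservatively, rather than the schedule being driven solely by the first selected frame.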