Video magnification involves amplifying and visualizing subtle variations in image sequences. Conventional video magnification techniques are typically classified as either Lagrangian or Eulerian. Lagrangian approaches estimate motions explicitly, where motions are defined as the subtle variations to be magnified. Eulerian approaches, on the other hand, do not estimate motions explicitly; rather, they estimate subtle variations by computing non-motion-compensated frame differences. Lagrangian approaches can magnify only motion changes, while Eulerian approaches can magnify color changes as well as motion. One Lagrangian technique based on optical flow extracts feature point trajectories and segments them into two sets, stationary and moving. An affine motion model is fitted to the stationary points, registering the examined sequence against a reference frame. Motions are then re-estimated, scaled, and added back to the registered sequence, generating the magnified output.
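The Eulerian principle described above can be illustrated with a minimal sketch: each pixel's deviation from a static reference is treated as the non-motion-compensated frame difference, scaled, and added back. This is an assumption-laden simplification (real Eulerian methods such as Eulerian Video Magnification apply a temporal bandpass filter per pixel; here the temporal mean stands in as the reference), and the function name and synthetic data are illustrative only.

```python
import numpy as np

def eulerian_magnify(frames, alpha):
    """Amplify per-pixel temporal variations without estimating motion.

    frames: array of shape (T, H, W); alpha: amplification factor.
    The temporal mean at each pixel serves as a stand-in reference;
    the deviation from it (a non-motion-compensated frame difference)
    is scaled by alpha and added back to form the magnified sequence.
    """
    mean = frames.mean(axis=0, keepdims=True)   # static reference frame
    variation = frames - mean                   # subtle per-pixel change
    return mean + (1.0 + alpha) * variation     # magnified output

# Synthetic sequence: a faint intensity oscillation on a flat image,
# standing in for a subtle color change to be magnified.
t = np.arange(8).reshape(-1, 1, 1)
frames = 0.5 + 0.01 * np.sin(2 * np.pi * t / 8) * np.ones((8, 4, 4))
out = eulerian_magnify(frames, alpha=9.0)
```

With alpha = 9, the deviation is multiplied by 10, so the faint oscillation's amplitude grows tenfold while the static component is unchanged; with a large alpha, any noise in the deviation is amplified by the same factor, which is the source of the artifacts discussed below.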
The above techniques suffer from a common drawback: they can handle only very small motions and limited amplification factors. When the motion or the amplification factor falls outside these narrow ranges, visual artifacts are generated. These artifacts, which are themselves amplified by the magnification process, can take the form of intensity clipping, blurring and the like, destroying the magnified video. Thus, a method for dynamic video magnification solving the aforementioned problems is desired.