Optical flow approximates the motion of objects within a visual representation: it is the velocity field that warps one image into another (usually very similar) image. Optical flow techniques are based on the idea that the same physical point on an object in the scene is captured by the camera at corresponding points in the two images, preserving certain image properties such as brightness or the gradient vector.
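In standard notation (the symbols here are the conventional ones, not taken from this text), the brightness constancy assumption and its first-order linearization can be written as:

```latex
% Brightness constancy: the intensity of a point is preserved along its motion
I(x + u, y + v, t + 1) = I(x, y, t)
% First-order Taylor expansion yields the optical flow constraint equation
I_x u + I_y v + I_t = 0
```

This is a single scalar equation in the two unknowns (u, v) at each pixel, which is the source of the aperture problem discussed below.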
Optical flow computations are central to many image processing applications that deal with groups of similar images. For example, image sequence compression algorithms commonly use optical flow parameters to represent images compactly in terms of changes relative to preceding or succeeding images in the sequence. Optical flow parameters are also used in three-dimensional reconstruction, either by stereo matching of pixels in a group of images of an object taken from different angles or by tracking the motion of rigid objects in a scene, as well as in image resolution enhancement. In addition, variations in optical flow over the area of an image may be used for image segmentation and for tracking the motion of an object across a sequence of images.
Despite the considerable research effort invested in optical flow computation, it remains a challenging task in the field of computer vision. It is a necessary step in applications such as stereo matching, video compression, object tracking, depth reconstruction, and motion-based segmentation. Hence, many approaches have been proposed for computing optical flow. Most methods assume brightness constancy and introduce additional assumptions on the flow field in order to deal with the inherent aperture problem. Lucas and Kanade (1981) tackled the aperture problem by solving for the parameters of a constant motion model over image patches. Subsequently, Irani et al. (1993, 1997) used region-based motion models in conjunction with Lucas-Kanade in order to recover the camera ego-motion. Spline-based motion models were suggested by Szeliski and Coughlan (1997).
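The patch-based scheme of Lucas and Kanade can be sketched as follows. This is an illustrative least-squares version in NumPy, not the authors' implementation; the function name, patch size, and gradient approximations are our own choices:

```python
# Minimal Lucas-Kanade sketch (illustrative only): assume a constant
# flow (u, v) over a small patch and solve the resulting overdetermined
# system of optical flow constraint equations in the least-squares sense.
import numpy as np

def lucas_kanade_patch(I1, I2, y, x, half=7):
    """Estimate a constant (u, v) over a (2*half+1)^2 patch centered at (y, x)."""
    Iy, Ix = np.gradient(I1.astype(float))      # spatial derivatives of frame 1
    It = I2.astype(float) - I1.astype(float)    # temporal derivative
    sl = (slice(y - half, y + half + 1), slice(x - half, x + half + 1))
    # Each patch pixel contributes one row of the constraint Ix*u + Iy*v = -It
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    # lstsq solves the normal equations (A^T A)[u, v]^T = A^T b and returns
    # the minimum-norm solution when the patch is rank deficient (aperture problem)
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow  # (u, v)
```

For a pure intensity ramp the system is rank deficient (the gradient has only one informative direction), so only the flow component along the gradient is recovered, a concrete instance of the aperture problem.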
Horn and Schunck (1981) sought to recover smooth flow fields and were the first to use functional minimization for solving optical flow problems, employing mathematical tools from the calculus of variations. Their pioneering work put forth the basic idea for solving dense optical flow fields over the whole image by introducing a quality functional with two terms: a data term penalizing deviations from the brightness constancy equation, and a smoothness term penalizing variations in the flow field. Several important improvements have been proposed following their work. Nagel (1990, 1986) proposed an oriented smoothness term that penalizes anisotropically for variations in the flow field according to the direction of the intensity gradients. Ari and Sochen (2006) recently used a functional with two alignment terms composed of the flow and image gradients. Replacing quadratic penalty terms by robust statistics integral measures was proposed in (Black and Anandan 1996; Deriche et al. 1995) in order to allow sharp discontinuities in the optical flow solution along motion boundaries. Extensions of the initial two-frame formulation to multi-frame formulations allowed spatiotemporal smoothness to replace the original spatial smoothness term (Black and Anandan 1991; Farnebäck 2001; Nagel 1990; Weickert and Schnörr 2001). Brox et al. (2004, 2006) demonstrated the importance of using the exact brightness constancy equation instead of its linearized version, and added a gradient constancy term to the data term, which may be important if the scene illumination changes over time. Cremers and Soatto (2005) proposed a motion competition algorithm for variational motion segmentation and parametric motion estimation. Amiaz and Kiryati (2005), followed by Brox et al. (2006), introduced a variational approach for joint optical flow computation and motion segmentation. In Farnebäck (2000, 2001), constant and affine motion models are employed.
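The two-term Horn and Schunck functional described above can be written (in standard symbols, not necessarily those of the original paper) as:

```latex
% Data term: deviations from the linearized brightness constancy equation.
% Smoothness term: variations of the flow field, weighted by alpha > 0.
E(u, v) = \iint \left[ \left( I_x u + I_y v + I_t \right)^2
        + \alpha \left( |\nabla u|^2 + |\nabla v|^2 \right) \right] \, dx \, dy
```

Minimizing E via its Euler-Lagrange equations yields a dense flow field over the whole image; a larger weight α favors smoother flow at the cost of fidelity to the data term.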
The motion model is assumed to act on a region, and optical-flow-based segmentation is performed by a region growing algorithm. In a classical contribution to structure from motion, Adiv (1985) used optical flow to determine the motion and structure of several rigid objects moving in a scene. Sekkati and Mitiche (2006) used joint segmentation and optical flow estimation, assuming a single rigid motion in each segmented region. Vázquez et al. (2006) used joint multi-region segmentation with high-order DCT basis functions representing the optical flow in each segmented region.