A camera or other image capture device can observe a three-dimensional scene and project that scene onto a detector such that a series of two-dimensional images is created over time. When the camera and the scene are in relative motion, the two-dimensional images change over time. The problem of tracking each point in space using such changing two-dimensional images is generally known as computing the optical flow.
For example, optical flow can generally refer to a change in x-axis position and a change in y-axis position for each point within a pair of two-dimensional images. An optical flow vector can describe such changes in x and y in vector form, and an optical flow field can aggregate the optical flow vectors for all points in the images. Such optical flow fields can be computed over a series of sequential images and prove useful in numerous applications, including real-time applications.
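As an illustration of the terms above, and not a method described in this disclosure, a flow field can be represented as an array holding a (dx, dy) vector per pixel. The toy brute-force patch matcher below (all names and the shift amount are hypothetical) recovers such a field for a pair of frames related by a one-pixel translation:

```python
import numpy as np

def block_match_flow(img0, img1, patch=3, search=2):
    """Dense flow by exhaustive patch matching; toy-scale, interior pixels only."""
    h, w = img0.shape
    r = patch // 2
    flow = np.zeros((h, w, 2), dtype=np.int64)  # flow[y, x] = (dx, dy)
    for y in range(r, h - r):
        for x in range(r, w - r):
            ref = img0[y - r:y + r + 1, x - r:x + r + 1]
            best_cost, best_d = None, (0, 0)
            # Search a small neighborhood in the second frame for the patch
            # that best matches the reference patch from the first frame.
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if r <= yy < h - r and r <= xx < w - r:
                        cand = img1[yy - r:yy + r + 1, xx - r:xx + r + 1]
                        cost = np.abs(ref - cand).sum()
                        if best_cost is None or cost < best_cost:
                            best_cost, best_d = cost, (dx, dy)
            flow[y, x] = best_d
    return flow

rng = np.random.default_rng(0)
img0 = rng.random((12, 12))
img1 = np.roll(img0, shift=1, axis=1)   # every pixel moves one step in +x
flow = block_match_flow(img0, img1)     # interior vectors come out as (1, 0)
```

The (H, W, 2) array is the "optical flow field" and each `flow[y, x]` entry is one "optical flow vector" in the sense used above.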
Existing methods for computing optical flow have various starting points and characteristics. For example, the Lucas-Kanade method and works derived from it operate on grayscale images. In particular, the Lucas-Kanade method assumes that optical flow is essentially constant in a local neighborhood of pixels and solves the basic optical flow equation for all pixels in the neighborhood together in a single computation. However, by assuming constant flow over a neighborhood of pixels, the Lucas-Kanade method fails to consider subpixel or single-pixel changes in optical flow. Further, determining optical flow over a neighborhood of pixels using a single calculation can be computationally demanding and can reduce opportunities for parallelization, making such a method undesirable for real-time applications. In addition, the use of grayscale images ignores color as a source of informational truth.
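The "single computation" referred to above is a least-squares solve of the brightness-constancy constraint Ix·u + Iy·v = -It stacked over a window. A minimal sketch of that classical step follows (the function name, window size, and test images are illustrative, not from this disclosure):

```python
import numpy as np

def lucas_kanade_at(img0, img1, y, x, win=4):
    """Solve Ix*u + Iy*v = -It in least squares over one (2*win+1)^2 window."""
    Iy, Ix = np.gradient(img0)   # np.gradient returns axis-0 (y) then axis-1 (x)
    It = img1 - img0             # temporal derivative between the two frames
    sl = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
    # One row per pixel in the window: the whole neighborhood shares one (u, v).
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Smooth synthetic frames related by a subpixel translation of (0.3, 0.2).
yy, xx = np.mgrid[0:40, 0:40].astype(float)
img0 = np.sin(0.3 * xx) + np.cos(0.2 * yy)
img1 = np.sin(0.3 * (xx - 0.3)) + np.cos(0.2 * (yy - 0.2))
u, v = lucas_kanade_at(img0, img1, 20, 20)   # approximately (0.3, 0.2)
```

Note how the entire window is forced to share a single (u, v); this is precisely the constant-flow assumption criticized above, which cannot represent flow that varies within the window.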
Other methods for computing optical flow, such as the Horn-Schunck method, include an assumption or global constraint with respect to smoothness. As a result, such methods attempt to minimize distortions in flow and prefer solutions that exhibit higher levels of smoothness. However, such assumptions and preferences with respect to smoothness inherently defeat the use of optical flow for applications such as edge detection or object segmentation.
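The smoothness preference can be seen directly in the Horn-Schunck update rule, sketched below under common textbook conventions (the stencil choice, parameter names, and iteration count are assumptions, not taken from this disclosure). Each iteration replaces the flow at a pixel with a neighborhood average corrected by the image gradients, so wherever the gradients are weak the update degenerates into pure averaging:

```python
import numpy as np

def neighbor_avg(f):
    """4-neighbor average with replicated borders (a common HS stencil)."""
    p = np.pad(f, 1, mode="edge")
    return 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])

def horn_schunck(img0, img1, alpha=1.0, iters=100):
    """Jacobi-style Horn-Schunck iteration; alpha weights the smoothness term."""
    Iy, Ix = np.gradient(img0)
    It = img1 - img0
    u = np.zeros_like(img0)
    v = np.zeros_like(img0)
    for _ in range(iters):
        ua, va = neighbor_avg(u), neighbor_avg(v)
        t = (Ix * ua + Iy * va + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u = ua - Ix * t
        v = va - Iy * t
    return u, v

# Where the image gradients vanish, the update reduces to neighbor_avg alone,
# which blurs any sharp discontinuity in the flow -- the behavior that makes
# smoothness-constrained flow ill-suited to edge detection or segmentation.
step = np.zeros((8, 8))
step[:, 4:] = 1.0                 # a sharp flow discontinuity
smoothed = neighbor_avg(step)     # the jump across the edge shrinks
```

Running `horn_schunck` on two identical frames leaves the flow at zero, while any step-like flow estimate is progressively averaged away, illustrating why such global smoothness constraints blur exactly the boundaries that edge detection and object segmentation need to preserve.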
Therefore, a system and method for computing optical flow that prioritizes local information over global information, takes advantage of available truth information from color, and is appropriate for real-time applications is desirable.