Various image processing techniques are available for determining the depths of scene elements in an environment using image capture devices. The depth data may be used, for example, in augmented reality, robotics, natural user interface technology, gaming and other applications.
Stereo matching is a process in which two images (a stereo image pair) of a scene taken from slightly different viewpoints are matched to find disparities (differences in position) of image elements which depict the same scene element. The disparities provide information about the relative distance of the scene elements from the camera. Stereo matching thus enables distances (e.g., depths of surfaces of objects in a scene) to be determined. A stereo camera including, for example, two image capture devices separated from one another by a known distance (the baseline) can be used to capture the stereo image pair. In some imaging systems, the scene is illuminated with a structured light pattern, for example a pattern of dots, lines or another pattern, which adds texture to the scene and aids matching.
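As a concrete illustration (not part of the original text), the matching and distance computation described above can be sketched with a naive sum-of-absolute-differences (SAD) block matcher. The window size, cost function, and synthetic test data below are all illustrative assumptions; the depth formula Z = f·B/d (focal length f, baseline B, disparity d) is the standard pinhole-stereo relation.

```python
import numpy as np

def block_match(left, right, max_disp, win=3):
    """Naive SAD block matching: for each pixel in the left image, find the
    horizontal shift (disparity) into the right image that minimizes the
    sum of absolute differences over a small window."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            # Only disparities that keep the candidate window inside the image.
            for d in range(0, min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                cost = np.abs(patch.astype(int) - cand.astype(int)).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

def depth_from_disparity(disp, f, baseline):
    """Z = f * B / d; pixels with zero disparity are left at depth 0 (unknown)."""
    out = np.zeros(disp.shape, dtype=float)
    valid = disp > 0
    out[valid] = f * baseline / disp[valid]
    return out

if __name__ == "__main__":
    # Synthetic stereo pair: the right view is the left view shifted by 2 px.
    rng = np.random.default_rng(0)
    left = rng.integers(0, 255, size=(10, 12)).astype(np.int32)
    right = np.roll(left, -2, axis=1)
    disp = block_match(left, right, max_disp=4)
    depth = depth_from_disparity(disp, f=500.0, baseline=0.1)
```

This brute-force search is O(width × height × max_disp × win²) per frame, which is what motivates the accuracy-versus-speed trade-off discussed next.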
In general, there is a trade-off between the accuracy of the results and the speed and resources needed to make the depth or distance calculations. Thus, for example, in some cases, one or more pixels in the captured images may be assigned incorrect disparity values. Further, in some instances, many pixels may not be assigned a disparity value at all, such that the resulting disparity map (or subsequently computed distance map) is sparsely populated. A sparse disparity map can result, for example, from a low-textured scene or a sparse projected light pattern. Although global optimization algorithms and other algorithms can produce dense disparity maps and alleviate the foregoing problems, they tend to require more computational resources (e.g., they are generally slower and consume more power) and are therefore less suited for real-time (e.g., about 30 frames per second) or near real-time (e.g., about 5 frames per second) applications.
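The sparsity described above often arises from per-pixel validity tests: in a low-textured region, the matching cost is nearly identical for every candidate disparity, so a uniqueness (ratio) test rejects the pixel rather than assign an unreliable value, leaving a hole in the disparity map. The sketch below is illustrative and not from the original; the cost values and the 0.8 ratio threshold are assumptions.

```python
import numpy as np

def uniqueness_check(costs, ratio=0.8):
    """Accept a disparity only if the best matching cost is clearly better
    than the runner-up; otherwise leave the pixel unassigned (None)."""
    order = np.argsort(costs)
    best, second = costs[order[0]], costs[order[1]]
    if second == 0 or best > ratio * second:
        return None  # ambiguous match -> hole in the disparity map
    return int(order[0])

# Textured pixel: one candidate disparity clearly wins, so it is assigned.
# (costs indexed by candidate disparity d = 0..4; values are hypothetical)
textured = uniqueness_check(np.array([900.0, 850.0, 40.0, 820.0, 880.0]))
# Texture-less pixel: all candidates look alike, so no disparity is assigned.
flat = uniqueness_check(np.array([52.0, 50.0, 51.0, 50.0, 53.0]))
```

Applied over a whole image, such tests trade completeness for reliability, which is one reason local methods yield sparse maps while global optimization methods, at greater computational cost, can fill them in.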