Over the past few years there have been several advances in the area of computer vision. One area that has seen significant progress is stereo matching. In the past, real-time stereo matching required special-purpose hardware. Now, there are stereo matching techniques that can be implemented on ordinary personal computers.
In overview, stereo matching involves determining a disparity between two or more views of the same scene obtained from different viewpoints (e.g., a left-eye viewpoint and a right-eye viewpoint). When there are only two viewpoints, stereo matching is referred to as two-frame stereo matching. In general, disparity refers to the difference in location of corresponding features in the scene as seen from the different viewpoints. The most common type of disparity is horizontal disparity, but vertical disparity is possible if the cameras are verged (i.e., toed in toward each other).
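The notion of horizontal disparity can be illustrated with a minimal block-matching sketch: for each pixel in the left view, a small window is compared against windows in the right view at a range of horizontal offsets, and the offset with the lowest sum of absolute differences (SAD) is taken as the disparity. The function name, window size, and disparity range below are illustrative choices, not part of any particular technique described here.

```python
import numpy as np

def block_match_disparity(left, right, max_disp=8, half_win=2):
    """Estimate per-pixel horizontal disparity by window-based block matching.

    Assumes rectified grayscale images, with a left-image pixel (y, x)
    appearing at (y, x - d) in the right image for disparity d >= 0.
    A brute-force SAD search; real-time methods use far faster schemes.
    """
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half_win, h - half_win):
        for x in range(half_win, w - half_win):
            ref = left[y - half_win:y + half_win + 1,
                       x - half_win:x + half_win + 1].astype(float)
            best_cost, best_d = np.inf, 0
            # Only search offsets that keep the window inside the image.
            for d in range(min(max_disp, x - half_win) + 1):
                cand = right[y - half_win:y + half_win + 1,
                             x - d - half_win:x - d + half_win + 1].astype(float)
                cost = np.abs(ref - cand).sum()  # SAD matching cost
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic check: shift a random texture by a known amount.
rng = np.random.default_rng(0)
left = rng.random((20, 30))
right = np.roll(left, -3, axis=1)  # true disparity of 3 pixels
disp = block_match_disparity(left, right, max_disp=6, half_win=2)
```

With a well-textured synthetic pair like this, the interior of the recovered disparity map equals the known shift; the borders and the wrap-around columns introduced by `np.roll` are unreliable, which foreshadows the boundary problems discussed next.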
When determining the disparity, stereo matching techniques must handle various problems, such as noise, textureless regions, depth discontinuities, and occlusion. For example, a stereo matching technique needs to handle artifacts caused by unavoidable light variations, image blurring, and sensor noise during image formation. In addition, the techniques need to handle object boundaries and occluded pixels (i.e., pixels seen in only one of the views).
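One common way to detect occluded or mismatched pixels, offered here as an illustrative sketch rather than any specific technique from the text, is a left-right consistency check: a left-view pixel with disparity d should map to a right-view pixel whose own disparity maps back to roughly the same location. Pixels where the two disparity maps disagree are flagged as unreliable. The function name and tolerance parameter are assumptions for this example.

```python
import numpy as np

def occlusion_mask(disp_left, disp_right, tol=1):
    """Flag pixels that fail the left-right consistency check.

    disp_left[y, x] = d means left pixel (y, x) matches right pixel
    (y, x - d). If disp_right[y, x - d] differs from d by more than
    tol, the match is inconsistent: the pixel is likely occluded
    (visible in only one view) or simply mismatched.
    """
    h, w = disp_left.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            d = int(disp_left[y, x])
            xr = x - d
            if xr < 0 or xr >= w or abs(int(disp_right[y, xr]) - d) > tol:
                mask[y, x] = True  # inconsistent: likely occluded
    return mask

# Consistent disparity maps: only pixels that map outside the image
# (the leftmost columns) are flagged.
dl = np.full((4, 6), 2, dtype=np.int32)
dr = np.full((4, 6), 2, dtype=np.int32)
mask = occlusion_mask(dl, dr)

# Introduce one inconsistency in the right map: the left pixel that
# maps onto it (x = 3, since 3 - 2 = 1) is now flagged.
dr2 = dr.copy()
dr2[1, 1] = 5
mask2 = occlusion_mask(dl, dr2)
```

Flagged pixels are typically filled in afterward (e.g., by propagating the disparity of neighboring reliable pixels), since no direct match exists for a point seen in only one view.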
Even with all the advances in stereo matching techniques, there is a continuing need for more accurate and efficient stereo matching techniques.