The development of technology related to automotive safety has shifted over the past decade from reactive systems, such as seatbelts, airbags, and occupant detection, to active systems, such as adaptive cruise control and collision avoidance. The effectiveness of the control algorithms used in these active systems may be enhanced through the use of monocular or stereovision camera systems. Compared to monocular vision systems, stereovision systems provide additional depth information, which can lead to more accurate visual detection and optical measurements.
In human vision, one key factor in analyzing three-dimensional information is the disparity between the images received by the left eye and the right eye. Stereovision systems, which use two cameras with overlapping fields of view to improve range resolution, attempt to utilize this principle of human vision. The range (R) can be determined from the disparity between images according to Equation 1, where fx is the focal length of the camera lens, given in pixels, in one direction (e.g., the X direction), B is the baseline distance between the two cameras, and dx is the disparity in the specified X direction. In order to differentiate between objects at various depths, a disparity map must first be determined; that is, for every pixel location in the left image, the corresponding pixel location in the right image must be found. The process for determining such a disparity map can be very computationally time consuming and expensive.
R = (fx · B) / dx          (Eq. 1)
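Equation 1 can be sketched as a short function; the numeric values below (focal length, baseline, disparity) are illustrative assumptions, not parameters from the source.

```python
def range_from_disparity(fx_px, baseline_m, disparity_px):
    """Range R = (fx * B) / dx for a rectified stereo pair (Eq. 1).

    fx_px        -- focal length in pixels (X direction)
    baseline_m   -- baseline distance between the two cameras, in meters
    disparity_px -- disparity of a matched pixel pair, in pixels
    """
    if disparity_px <= 0:
        # Zero disparity corresponds to a point at infinity.
        raise ValueError("disparity must be positive for a finite range")
    return (fx_px * baseline_m) / disparity_px

# Hypothetical example: fx = 800 px, B = 0.12 m, disparity = 8 px
print(range_from_disparity(800.0, 0.12, 8.0))  # -> 12.0 (meters)
```

Note that range is inversely proportional to disparity, so distant objects produce small disparities and are measured with coarser resolution.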
Due to limited image resolution, the disparity calculation will not be completely accurate even in ideal cases where the local texture of the various objects being observed allows for relatively distinct matching between the viewed images. The error in the range (R) determined from Equation 1 is proportional to R². This error will increase significantly if the camera lenses are not parallel or if any lens distortion exists. Thus, conventional stereovision systems must be calibrated, which can also be both time consuming and expensive. This type of calibration must be done by a trained technician under controlled lighting conditions using a specially designed image template. After calibration, any change in a camera's position, which can be caused by a number of uncontrollable factors, such as aging, thermal effects, or even accidental physical movement, will reintroduce significant error into the range (R) calculation. The end result is the need to recalibrate the vision system. Therefore, a need exists for efficient and inexpensive methods that can perform automatic dynamic calibration using nothing more than the natural images or scenes observed through the cameras.
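The quadratic error growth follows from differentiating Equation 1 with respect to disparity: |ΔR| ≈ R²/(fx·B)·|Δd|. A minimal sketch, assuming a one-pixel disparity error and illustrative camera parameters:

```python
def range_error(fx_px, baseline_m, range_m, disparity_err_px=1.0):
    """Approximate range error |dR| ~ R^2 / (fx * B) * |dd|,
    obtained by differentiating R = fx * B / dx with respect to dx."""
    return (range_m ** 2) / (fx_px * baseline_m) * disparity_err_px

# Hypothetical setup: fx = 800 px, B = 0.12 m, 1-pixel disparity error.
for R in (5.0, 10.0, 20.0):
    print(f"R = {R:5.1f} m -> error ~ {range_error(800.0, 0.12, R):.3f} m")
```

Doubling the range quadruples the error, which is why small calibration drifts that perturb the measured disparity matter most for distant objects.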