Techniques for detecting an obstacle are classified into a first type, which uses a laser or an ultrasonic wave, and a second type, which uses a TV camera. In the first type, the laser can be expensive, and the resolution of the ultrasonic wave may be low. Accordingly, the accuracy of obstacle detection may be a problem. Furthermore, an active sensor using the laser or the ultrasonic wave cannot independently recognize a traveling lane.
On the other hand, in the second type, the TV camera is relatively inexpensive and is suitable for obstacle detection from the viewpoint of resolution, measurement accuracy, and measurement limit. Furthermore, recognition of the traveling lane is possible. When a TV camera is used, there is a method using one camera and another method using a plurality of cameras (a stereo camera). In the method using one camera, a road area and an obstacle area are separated from an image input through the one camera according to information such as intensity, color, or texture. For example, the road area is detected by extracting an area of low brightness (a gray area) from the image. Alternatively, the road area is detected by extracting an area with few textures from the image, and the remaining area is detected as the obstacle area. However, many obstacles have an intensity, color, or texture similar to that of the road. Accordingly, it may be impossible to separate the obstacle area from the road area by this method.
In the method using a plurality of cameras, the obstacle area is detected according to three-dimensional information. In general, this method is called "stereo view". In the stereo view, for example, two cameras are located at the left side and the right side of a vehicle along the moving direction on a road plane. The same point in three-dimensional space is projected onto the left image and the right image, and its three-dimensional position is calculated by triangulation. If the position and direction of each camera relative to the road plane are calculated in advance, the height above the road plane of an arbitrary point on the image can be detected by the stereo view. In this way, the obstacle area and the road area can be separated by height. In the stereo view, the problem of the method using one camera can be avoided.
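The triangulation and height-from-road-plane computation described above can be sketched as follows. This is a minimal illustration, not part of any disclosure: it assumes rectified cameras with parallel optical axes mounted at a known height above a flat road, and all function and variable names here are illustrative.

```python
import numpy as np

def triangulate_height(xl, xr, y, focal, baseline, cam_height):
    """Recover depth and height above the road plane for a point seen at
    column xl in the left image and xr in the right image (same row y,
    measured downward from the principal point). Assumes rectified cameras
    with parallel optical axes at cam_height above a flat road."""
    disparity = xl - xr                  # pixel disparity between the views
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    z = focal * baseline / disparity     # depth along the optical axis
    # A point on the road at depth z projects at y = focal * cam_height / z,
    # so mapping y back to metric units and subtracting from the camera
    # height yields the point's height above the road plane.
    y_metric = y * z / focal
    height_above_road = cam_height - y_metric
    return z, height_above_road
```

For a point with a 10-pixel disparity, a 700-pixel focal length, and a 0.3 m baseline, the depth is 21 m; a point whose recovered height is near zero lies on the road plane, while a clearly positive height marks an obstacle candidate.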
However, in ordinary stereo view, a problem exists in searching for corresponding points. In general, stereo view is a technique for detecting the three-dimensional position of an arbitrary point on the image based on a coordinate system fixed to the stereo camera (this coordinate system is called a "world coordinate system"). The search for corresponding points is the search calculation necessary to match the same point in space between the left image and the right image. Because the calculation cost of this search is extremely high, it is a problem, and it is a factor preventing the practical realization of stereo view.
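The cost of the corresponding-point search can be made concrete with a brute-force block-matching sketch (an illustrative simplification, not any particular disclosed method; names are hypothetical). For every pixel, a window is slid along the same row of the other image over every candidate disparity, which is why the total cost grows with image size times window size times search range.

```python
import numpy as np

def match_along_epipolar(left, right, row, col, half=2, max_disp=16):
    """Brute-force corresponding-point search: for one pixel of the left
    image, slide a (2*half+1)^2 window along the same row of the right
    image and return the disparity with the lowest sum of absolute
    differences (SAD). Repeating this for every pixel is the expensive
    search calculation described in the text."""
    patch = left[row-half:row+half+1, col-half:col+half+1].astype(np.int32)
    best_d, best_cost = 0, None
    for d in range(0, max_disp + 1):
        c = col - d                      # candidate column in the right image
        if c - half < 0:
            break
        cand = right[row-half:row+half+1, c-half:c+half+1].astype(np.int32)
        cost = np.abs(patch - cand).sum()    # SAD over the window
        if best_cost is None or cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

On a synthetic pair where the left image is the right image shifted by three pixels, the search recovers a disparity of 3 for a textured point.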
On the other hand, another method to separate the obstacle area from the road area is disclosed in Japanese Patent Disclosures (Kokai) P2001-76128 and P2001-243456. In this method, a point in one of the left image and the right image is assumed to exist on the road plane, and a parameter to transform that point to its projection point on the other image is used. The one image is transformed using the parameter, and the obstacle area is separated from the road area using a difference between the transformed image and the other image.
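This road-plane transformation can be sketched as a 3x3 homography warp followed by a per-pixel difference. The sketch below is illustrative only (nearest-neighbour sampling, grayscale images, hypothetical names); it assumes the transformation parameter is given as a homography matrix `H`, which is one common way to represent a plane-to-plane image mapping, not necessarily the representation of the cited disclosures.

```python
import numpy as np

def warp_by_road_homography(image, H, out_shape):
    """Warp one camera's image into the other camera's view under the
    assumption that every pixel lies on the road plane. H is a 3x3
    homography standing in for the "transformation parameter" of the text;
    nearest-neighbour sampling keeps the sketch dependency-free."""
    h_out, w_out = out_shape
    ys, xs = np.mgrid[0:h_out, 0:w_out]
    pts = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1)
    src = np.linalg.inv(H) @ pts         # map destination pixels back to source
    sx = np.rint(src[0] / src[2]).astype(int)
    sy = np.rint(src[1] / src[2]).astype(int)
    valid = (0 <= sx) & (sx < image.shape[1]) & (0 <= sy) & (sy < image.shape[0])
    out = np.zeros(out_shape, dtype=image.dtype)
    out[ys.ravel()[valid], xs.ravel()[valid]] = image[sy[valid], sx[valid]]
    return out

def obstacle_mask(left, right, H, threshold=30):
    """Pixels that truly lie on the road plane should agree after warping;
    large remaining differences mark candidate obstacle regions."""
    warped = warp_by_road_homography(left, H, right.shape)
    diff = np.abs(warped.astype(np.int16) - right.astype(np.int16))
    return diff > threshold
```

With the identity homography and identical images, the warp is a no-op and no obstacle pixels are flagged; any region that violates the road-plane assumption survives the difference.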
In certain situations, such as when the vehicle passes over a road that includes bumps, when the loading condition of persons or the carrying condition of baggage within the vehicle changes, when the vehicle vibrates, or when the road tilts, the position and direction of the cameras (i.e., the tilt of the road plane relative to the vehicle) change. Additionally, in those situations, the value of the transformation parameter also changes.
In Japanese Patent Disclosure (Kokai) P2001-76128, the transformation parameter of the image is calculated using two white lines drawn on the road. However, in this method, if only one white line is visible or the two white lines are dirty, the transformation parameter cannot be calculated.
In Japanese Patent Disclosure (Kokai) P2001-243456, a plurality of suitable directions is selected from the range of possible relative locations between the camera and the road plane, and a transformation parameter corresponding to each of the plurality of suitable directions is calculated. One image is transformed by using each transformation parameter, the transformed image is compared with the other image, and the area of lowest similarity is detected as the obstacle in the image. In this method, because the area of lowest similarity is detected by matching the transformed image with the other image as a road plane, the obstacle often cannot be detected when a periodic pattern is drawn on the vehicle or when the road surface reflects due to rain. In such cases, the transformation parameter of the actual road plane is necessary.
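The multi-parameter scheme just described can be sketched as follows. This is a deliberately simplified illustration, not the disclosed method: each "transformation parameter" is reduced here to a horizontal pixel shift standing in for a full road-plane transformation, and all names are hypothetical. The point it demonstrates is that every candidate parameter costs a full image transform.

```python
import numpy as np

def detect_with_candidate_shifts(left, right, shifts, threshold=30):
    """For each candidate "transformation parameter" (simplified to a
    horizontal shift), transform the left image, then keep each pixel's
    lowest absolute difference over all candidates. Pixels that disagree
    under every candidate are flagged as obstacle candidates. Note that
    each candidate triggers a full image transform, which is the
    computational burden discussed in the text."""
    best = np.full(right.shape, np.iinfo(np.int16).max, dtype=np.int16)
    for s in shifts:
        warped = np.roll(left, s, axis=1)    # stand-in for one transform
        diff = np.abs(warped.astype(np.int16) - right.astype(np.int16))
        best = np.minimum(best, diff)        # best agreement per pixel
    return best > threshold
```

When one candidate matches the true road-plane transformation, road pixels agree and are not flagged, while a region that violates the assumption (e.g., a bright obstacle patch) disagrees under every candidate and is flagged.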
In order to accurately detect the obstacle, one of the selected transformation parameters must approximate the actual transformation parameter with sufficient accuracy. Furthermore, for example, when heavy baggage is carried on the rear part of the vehicle, the body of the vehicle often leans toward the rear. Briefly, transformation parameters for events with a low probability of occurrence are also necessary. In order to satisfy these two conditions, many transformation parameters are necessary. As a result, the creation of many transformed images greatly increases calculation time.
Furthermore, except for a sudden vibration caused by a bump in the road, a change in the vibration of the vehicle and a change in the tilt of the road plane are slow in comparison with the input interval of TV images, and the loading condition of persons or the carrying condition of baggage does not change while the vehicle is traveling. Accordingly, the transformation parameter of the image changes smoothly under such vibration and tilt conditions. However, in the above-mentioned method, the status of the current transformation parameter is not taken into consideration, and transformed images are respectively prepared using all transformation parameters. As a result, a problem exists in that useless transformations are executed.
As mentioned above, in the prior art, it is assumed that the image input through the TV camera includes the road plane. When an image input from one camera is transformed (projected) onto an image input from another camera, the transformation parameter is calculated using the white lines. However, if no white line is included in the image, the transformation parameter cannot be calculated.
Furthermore, in the method of preparing a large number of transformation parameters and detecting the obstacle area of lowest similarity by comparing the transformed one image with the other image, it is necessary to prepare a transformation parameter for each of various events whose probability of occurrence is low. Creating each transformed image takes great calculation time. In addition, the current status of the vehicle is not taken into consideration in the prepared transformation parameters. Accordingly, many useless transformations are executed.