1. Field of the Invention
The present invention relates to vehicle detection apparatuses, and in particular, relates to a vehicle detection apparatus that detects a vehicle such as a leading vehicle based on position data obtained by a position detection unit.
2. Description of Related Art
In recent years, vehicle detection apparatuses are under development that detect, for example, vehicles around a subject vehicle on which an imaging unit, such as a CCD (Charge Coupled Device) camera or a CMOS (Complementary Metal Oxide Semiconductor) camera, or a radar apparatus is mounted, by image analysis of an image captured by the imaging unit or by analysis of a reflected radio wave or laser beam emitted by the radar apparatus (see, for example, Japanese Patent No. 3349060).
As a method of detecting an object including a vehicle, for example, images of the surroundings are captured simultaneously by a pair of imaging units such as cameras, and stereo matching processing or the like is performed on the pair of obtained images to calculate parallax information for each pixel and thereby the distance to an object. As another example, a radio wave is radiated from a radar apparatus and its reflected wave is analyzed to detect the distance to an object. Based on the obtained distance information or the like, the position of the object in the real space can be determined, and the object can thereby be detected in the real space.
For example, according to the method described in Japanese Patent No. 3349060, in a scene in which, for example, an image T as shown in FIG. 18 is captured, parallax information is obtained for each pixel block of the image T by performing stereo matching processing on a pair of images including the captured image T. If the parallax information or distance information calculated therefrom is allocated to each pixel block, as shown in FIG. 19, the parallax or distance information can be represented like an image. Hereinafter, the parallax or distance information represented like an image is called a distance image Tz.
Image-like data similar to the distance image Tz shown in FIG. 19 is also obtained when the distance is detected by analyzing a reflected wave of a radio wave radiated from a radar apparatus and the detected distance data is assigned to the direction in which each distance is detected so as to represent the data like an image. Hereinafter, the distance image Tz also includes an image obtained by representing, like an image, data of distances detected by using a radar apparatus.
Then, the distance image Tz obtained in this manner is divided, as shown in FIG. 20, into segments Dn in a thin rectangular shape extending in the vertical direction with a predetermined pixel width, and a histogram Hn as shown in FIG. 21 is created for each segment Dn, into which the information of the parallax dp or the distance Z belonging to the segment Dn is polled. Then, for example, the class value of the class to which the mode of the histogram Hn belongs is set as the representative parallax dpn or the representative distance Zn of an object in the segment Dn. This process is repeated for all segments Dn to calculate the representative parallax dpn or the representative distance Zn for each segment Dn.
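The polling described above can be sketched as follows. This is a minimal illustration under assumed conditions, not code from the specification: the strip width, the parallax range, the class (bin) width, and the convention that invalid pixels hold the value 0 are all assumptions made for the sketch.

```python
import numpy as np

def representative_parallax(distance_image, strip_width=4,
                            bin_width=0.5, min_dp=0.0, max_dp=64.0):
    """For each thin vertical segment Dn of the distance image Tz, build a
    histogram Hn of the parallax values dp belonging to the segment and take
    the class value (bin center) of the mode as the representative parallax
    dpn.  Pixels with no effective parallax are assumed to be stored as 0
    and are ignored."""
    h, w = distance_image.shape
    bins = np.arange(min_dp, max_dp + bin_width, bin_width)
    dpn = []
    for left in range(0, w, strip_width):
        segment = distance_image[:, left:left + strip_width]
        valid = segment[segment > 0]          # poll only effective parallax data
        if valid.size == 0:
            dpn.append(None)                  # no effective data in this segment
            continue
        hist, edges = np.histogram(valid, bins=bins)
        mode_bin = np.argmax(hist)
        # class value of the class to which the mode belongs
        dpn.append((edges[mode_bin] + edges[mode_bin + 1]) / 2.0)
    return dpn
```

Each entry of the returned list corresponds to one segment Dn; a representative distance Zn could then be obtained from each dpn by Formula (3) below.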
The parallax dp and the distance Z are associated as shown below in an image analysis using a pair of imaging units such as cameras. When a point on a reference plane such as the ground directly below the center of the pair of imaging units is set as an origin, the distance direction, that is, the direction toward a point at infinity on the front side of the imaging units, is set as the Z axis, and the left and right direction and the up and down direction are set as the X axis and the Y axis respectively, a point (X, Y, Z) in the real space and the coordinates (i, j) of a pixel having the above parallax dp on the distance image Tz are associated in a one-to-one relationship by the following coordinate conversions based on the principle of triangulation:

X = CD/2 + Z × PW × (i − IV)  (1)
Y = CH + Z × PW × (j − JV)  (2)
Z = CD/(PW × (dp − DP))  (3)

where CD is the interval between the pair of imaging units, PW is the viewing angle per pixel, CH is the mounting height of the pair of imaging units, IV and JV are the i coordinate and the j coordinate, respectively, of the point at infinity on the front side on the distance image Tz, and DP is the vanishing point parallax. The representative parallax dpn is associated with the representative distance Zn in a one-to-one relationship by the above Formula (3).
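Formulas (1) to (3) can be applied directly, for example as follows; the default parameter values here are hypothetical placeholders for illustration and are not values given in the specification.

```python
def pixel_to_real_space(i, j, dp,
                        CD=0.5, PW=0.0002, CH=1.2,
                        IV=320.0, JV=240.0, DP=0.0):
    """Maps a pixel (i, j) with parallax dp on the distance image Tz to a
    point (X, Y, Z) in the real space by Formulas (1) to (3).  CD is the
    interval between the pair of imaging units, PW the viewing angle per
    pixel, CH the mounting height, (IV, JV) the point at infinity on the
    image, and DP the vanishing point parallax (all placeholder values)."""
    Z = CD / (PW * (dp - DP))          # Formula (3)
    X = CD / 2 + Z * PW * (i - IV)     # Formula (1)
    Y = CH + Z * PW * (j - JV)         # Formula (2)
    return X, Y, Z
```

Because Formula (3) does not involve (i, j), a representative parallax dpn for a segment converts to a single representative distance Zn, which is the one-to-one relationship noted above.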
Dividing the distance image Tz into segments Dn in a thin rectangular shape extending in the vertical direction as described above corresponds, in terms of the real space, to dividing an imaging region R in the real space of an imaging unit A mounted on a subject vehicle into a plurality of segmented spaces Sn extending in the up and down direction, as shown in the plan view of FIG. 22.
This also applies to a radar apparatus. Specifically, if the imaging unit A is considered as a radar apparatus and the region R is considered as a radiation region of a radio wave in the real space by the radar apparatus in FIG. 22, dividing the distance image Tz into segments Dn in a thin rectangular shape extending in the vertical direction corresponds to dividing the radiation region R in the real space by the radar apparatus A into the plurality of segmented spaces Sn extending in the up and down direction.
Then, if the representative distance Zn (the representative distance Zn associated with the calculated representative parallax dpn in a one-to-one relationship) in the segment Dn of the distance image Tz corresponding to each segmented space Sn is plotted in that segmented space Sn in the real space, the representative distances Zn are plotted, for example, as shown in FIG. 23. In actuality, depending on the number of segments Dn, many more points than shown in FIG. 23 are plotted finely.
Then, as shown in FIG. 24, for example, mutually adjacent plotted representative distances Zn are grouped into groups G1, G2, G3, . . . based on the distances between them or their directional properties (that is, whether they extend in the left and right direction (the X direction) or in the distance direction (the Z direction)). Then, as shown in FIG. 25, an object can be detected by linearly approximating the points belonging to each group.
If, for example, a group O extending substantially in the left and right direction and a group S extending substantially in the distance direction share a corner point C, the detected objects are integrated or separated by assuming that both groups belong to the same object.
When, for example, objects are detected by performing image analysis of an image captured by an imaging unit, a detection result can be visualized on the image T by, as shown in FIG. 26, surrounding each object detected based on the distance image Tz as described above by rectangular closing lines on the original image T captured by the imaging unit. In this manner, each object including a vehicle can be detected.
Furthermore, Japanese Patent Application Laid-Open No. H8-241500 proposes, as a method of avoiding erroneous detection of two vehicles close to each other as one vehicle, a method of recognizing a front vehicle without using parallax or distance information: after regions corresponding to the turn signal lamps and stop lamps are found in the image, the front vehicle is recognized from the spatial relationship of those lamps, because the spacing between the turn signal lamps and stop lamps on the rear part of a vehicle is almost the same regardless of the vehicle.
However, if the method described in Japanese Patent No. 3349060 (FIGS. 18 to 26) is adopted in a scene in which, for example, an image T as shown in FIG. 27 is captured, it is desired that a leading vehicle Vah be detected alone, as shown in FIG. 28A. In actuality, however, because the leading vehicle Vah and a hedge H are captured adjacent to each other, the leading vehicle Vah and the hedge H may be put together into one group and detected as one object, as shown in FIG. 28B.
As another example, in a scene in which an image T as shown in FIG. 29 is captured, effective parallax or distance information is likely to be detected in the left and right edge portions (encircled with alternate long and short dashed lines in FIG. 29) of a rear gate B of a load-carrying platform P of a flat-bodied truck, which is the leading vehicle Vah, and in portions corresponding to a front wall F (also referred to as a front structure or guard frame) and a rear portion of a cab Ca (likewise encircled with alternate long and short dashed lines in FIG. 29). However, almost no effective parallax or distance information is detected in the portion corresponding to the center of the rear gate B of the load-carrying platform P, where the surface is flat and lacking in structure (also called texture).
Thus, if, as described above, the distance image Tz is divided into segments Dn in a thin rectangular shape and groups are formed by calculating the representative parallax dpn and the representative distance Zn for each segment Dn, the left edge portion and the right edge portion of the rear gate B of the load-carrying platform P of the leading vehicle Vah may not be put together into one group, as shown in FIG. 30; these edges are then detected as separate objects at a distance Zb from the subject vehicle, and the front wall F and the cab Ca may be detected as still another object at a distance Zf from the subject vehicle.
In a scene in which, for example, an image T as shown in FIG. 31, which looks like a combination of FIGS. 27 and 29, is captured, the left edge portion and the right edge portion of the rear gate B of the load-carrying platform P of the leading vehicle Vah cannot be put together into one group, as shown in FIGS. 31 and 32. Instead, the left edge portion is detected as being integrated with the hedge H.
Therefore, although only the leading vehicle Vah and the hedge H are captured in the scene, the hedge H (integrated with the left edge portion), the front wall F together with the cab Ca, and the right edge portion of the rear gate B of the load-carrying platform P may each be detected as separate objects.
On the other hand, if the method described in Japanese Patent Application Laid-Open No. H8-241500 is adopted when the subject vehicle is traveling on a multi-lane road and, for example, a vehicle of the same type as the leading vehicle Vah is traveling on the right adjacent lane (not illustrated), the turn signal lamp and stop lamp on the right side of the leading vehicle Vah and those on the left side of the vehicle traveling on the right lane may be detected as the left and right turn signal lamps and stop lamps of one vehicle, causing a problem in the reliability of detection.
Therefore, if the same object is detected as separate objects, or if separate objects (or a region where there is actually no object) are detected as one object, control will be exercised based on erroneous object information, and automatic control of a vehicle that is supposed to contribute to safe traveling will instead increase the danger of accidents.
Hereinafter, information about the position of an object including the distance to the object obtained, as described above, based on images obtained by a pair of imaging units such as cameras or obtained by a radar apparatus is called position data.