1. Field of the Invention
The present invention relates to an image processing technology for processing a plurality of images of an object captured by a plurality of cameras, and to a position detecting technology for detecting the position of the object from the image processing results.
2. Description of Related Art
Heretofore, in the field of motion stereo vision, in which a stereoscopic sense is gained from the motion parallax generated when a mobile object such as a robot moves, there has been known a technology for extracting three-dimensional features of an object such as an obstacle by processing images captured through a lens corresponding to a projection plane, as disclosed in Japanese Patent Laid-open No. Hei. 5-165957 (paragraphs [0060] and [0061] and FIG. 14), for example. JP Hei. 5-165957A discloses an image processing method that imitates the retinal area (receptive field) of animals to reduce processing time by dividing an input image and performing polarity conversion on a spherical surface (receptive field method). This receptive field method prescribes three types of projection: spherical plane projection, cylindrical plane projection, and planar projection.
Among these, spherical plane projection projects an image onto a spherical surface; it provides the broadest visual field and yields an image equivalent to one captured through a fish-eye lens. Cylindrical plane projection projects an image onto a cylindrical surface; it provides a broad visual field in the circumferential direction of the cylinder, although the visual field in the axial direction is limited, and yields an image equivalent to one captured through a cylindrical lens. Planar projection projects an image onto a flat plane; it provides the narrowest visual field and yields an image equivalent to one captured through a standard or telephoto lens. It is noted that the image processing method described in JP Hei. 5-165957A relates to motion stereo vision and does not accurately detect the position of an object from stereo images.
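The contrast between the projection types above can be sketched by the radial mapping r(theta) each one implies. The reference describes the projections only qualitatively, so the formulas below (in particular the equidistant fish-eye model for the spherical case) are illustrative assumptions, not the reference's own definitions.

```python
import math

def planar_radius(theta, f=1.0):
    # Planar projection (standard/telephoto lens): r = f * tan(theta).
    # Narrowest visual field; theta must stay well below 90 degrees.
    return f * math.tan(theta)

def spherical_radius(theta, f=1.0):
    # Spherical plane projection, modeled here as an equidistant fish-eye
    # mapping r = f * theta; the visual field can approach 90 degrees.
    return f * theta

# Near the optical axis the two mappings agree (tan(theta) ~ theta),
# but the planar radius diverges as theta approaches 90 degrees.
theta = math.radians(80.0)
ratio = planar_radius(theta) / spherical_radius(theta)  # roughly 4 at 80 degrees
```

This divergence is why planar projection cannot cover the wide field a fish-eye image provides without distorting the allocation of image area across angles.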
Japanese Patent Laid-open No. 2004-309318 (paragraphs [0060] and [0077], FIG. 7) describes a technology for detecting the position of an object by processing stereo images obtained by two cameras. The calibration information generating apparatus and position detecting apparatus described in JP 2004-309318A operate as follows. The calibration information generating apparatus measures calibration data for each pixel captured by each camera to generate calibration information (a calibration table) per camera. The position detecting apparatus then detects the position of the object by removing distortion from the captured images, based on the stereo images obtained by the two right and left cameras and the calibration information (calibration tables). A wide-angle lens, such as a fish-eye lens, is assumed as the lens of each camera.
It is noted that the stereo images obtained by the two right and left cameras are transformed into screen coordinates so that center projection is attained while distortion correction is performed on a plane where the right and left epipolar lines coincide. When the transverse (horizontal) direction of an incident ray of light (optical axis) is denoted by α and the vertical (perpendicular) direction thereof by γ, the position (u, v) on the screen may be represented by Equations 1a and 1b below, where k1 and k2 are predetermined constants.

u = k1 tan α  (Eq. 1a)
v = k2 tan γ  (Eq. 1b)
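Equations 1a and 1b can be transcribed directly. The constants k1 and k2 are predetermined scale factors (for instance, a focal length expressed in pixel units); the value 500.0 below is an arbitrary illustrative choice, not taken from the reference.

```python
import math

K1 = 500.0  # hypothetical value of the predetermined constant k1
K2 = 500.0  # hypothetical value of the predetermined constant k2

def screen_position(alpha, gamma, k1=K1, k2=K2):
    """Screen coordinates (u, v) of an incident ray under center projection,
    where alpha is the horizontal and gamma the vertical direction (radians)."""
    u = k1 * math.tan(alpha)   # Eq. 1a
    v = k2 * math.tan(gamma)   # Eq. 1b
    return u, v

u, v = screen_position(math.radians(30.0), math.radians(10.0))
```

Note that u and v grow without bound as α or γ approaches 90 degrees, which is the root of the difficulty discussed next.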
However, the prior art image processing method based on stereo vision has had the following problems when the input image is one captured through a fish-eye lens, for example.
That is, when a high-angle part of the image, close to an incident angle of 90 degrees in the vertical (perpendicular) direction, is to be utilized, the center part (low-angle part) of the image, where resolution is relatively high, is compressed when the image is projected onto a plane by center projection. Consequently, if the center part of such a compressed image is used to measure position, the precision of the measurement inevitably drops. Conversely, if the center part is not compressed, the prior art method wastes memory and calculation time: the information content of the low-angle image at the center is larger than that of the high-angle image, so when interpolation is carried out to match the resolution of the central low-angle image, the low-resolution peripheral image (high-angle part) is interpolated in the same manner.
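The compression described above can be illustrated numerically. Suppose the screen must cover incident angles up to 85 degrees under center projection (u = k tan α); the fraction of the screen half-width occupied by the central ±10 degrees is then tan 10° / tan 85°, versus 10/85 under an equidistant fish-eye mapping (r = k α). Both the equidistant model and the specific angles are illustrative assumptions, not values from the references.

```python
import math

# Fraction of the screen half-width occupied by the central +/-10 degrees
# when the field of view extends to 85 degrees:
center_fraction_planar = math.tan(math.radians(10.0)) / math.tan(math.radians(85.0))
center_fraction_fisheye = 10.0 / 85.0

# Under center projection the central part receives only about 1.5% of the
# half-width, versus about 11.8% in the equidistant fish-eye image: the
# high-resolution center is strongly compressed relative to the periphery.
```

Matching the whole screen to the central resolution instead would inflate the peripheral (low-information) region by the inverse of this ratio, which is the memory and calculation waste noted above.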
Accordingly, the present invention seeks to solve the aforementioned problems and to provide an image processing technology that reduces the memory used and the processing time required in processing wide-angle images by stereo vision.