Various related technologies for identifying the position of a mobile object are known.
For example, there is the global positioning system (GPS). In the GPS, an in-vehicle receiver receives radio waves transmitted from GPS satellites, and positioning is performed on the basis of the time differences between the transmission of the radio waves and their reception.
Such positioning systems based on radio technologies have a problem in that positioning cannot be performed at spots where radio waves from the necessary number of transmitting stations cannot be received, such as valleys between buildings and underground spaces. In urban areas and the like, therefore, a situation sometimes occurs in which positioning is unavailable.
There has been disclosed a positioning technology which avoids this problem, and which is based on a completely different principle. In this positioning technology, features of scenery images acquired by a camera mounted in a mobile object are collated, by using a computational geometry technology, with features of scenery images stored in advance in a database, and a present position of the mobile object is identified on the basis of the collation result.
For example, in patent literature (PTL) 1, there is disclosed a driving support system using the above-described technology. The driving support system disclosed in PTL 1 includes a camera, a car navigation device, an outside-vehicle image reproduction unit, an image comparison unit and a detailed position calculation unit. The camera images external scenes from inside a vehicle. The car navigation device calculates positions of the vehicle itself. The outside-vehicle image reproduction unit creates, by using a three-dimensional (3D) map, outside-vehicle images expected to be imaged at the positions calculated by the car navigation device. The image comparison unit compares the images created by the outside-vehicle image reproduction unit with the images imaged by the camera. The detailed position calculation unit calculates a detailed position of the vehicle itself by using the result of the comparison performed by the image comparison unit. In PTL 1, there is disclosed a technology which enables accurate calculation of the position of the vehicle itself in such a configuration as described above.
In non-patent literature (NPL) 1, the following technology is disclosed. In this technology, first, scenery images acquired by a camera mounted in a mobile object are correlated, with respect to the arrangement of scale invariant feature transform (SIFT) feature points, with a database in which scenery image features have been stored in advance. Through this correlation, the image features in the database that are assumed to have been imaged at the present position of the mobile object are identified. Consequently, it is determined, from the imaging position information corresponding to the scenery image correlated with these identified image features, that the mobile object exists at the position indicated by that imaging position information.
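The correlation step described above can be sketched as nearest-neighbour descriptor matching followed by a ratio test, a standard technique for SIFT features. The sketch below uses short toy descriptors in place of real 128-dimensional SIFT descriptors, and all function names and the database layout are hypothetical illustrations rather than the method of NPL 1.

```python
def match_descriptors(query, database, ratio=0.8):
    """Match each query descriptor to its nearest database descriptor,
    accepting a match only when the nearest neighbour is clearly closer
    than the second nearest (the ratio test)."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))

    matches = []
    for qi, q in enumerate(query):
        ranked = sorted(range(len(database)), key=lambda di: dist2(q, database[di]))
        best, second = ranked[0], ranked[1]
        # Compare squared distances, so the ratio threshold is squared too.
        if dist2(q, database[best]) < (ratio ** 2) * dist2(q, database[second]):
            matches.append((qi, best))
    return matches

def locate(query_desc, db_images):
    """db_images: list of (imaging_position, descriptors) pairs.
    Return the imaging position of the database image that collects
    the most accepted matches, as in the scheme described above."""
    best = max(db_images, key=lambda entry: len(match_descriptors(query_desc, entry[1])))
    return best[0]
```

In practice a library such as OpenCV would supply the SIFT detector and an approximate nearest-neighbour index; the logic of "most matches wins, and its stored imaging position is reported" is the same.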
Further, in NPL 2, the following technology is disclosed. In this technology, groups of feature points extracted from two images imaged at two respective spots are correlated with each other. Through this correlation, a fundamental matrix representing the correspondence relation between the two images, and a matrix representing the relative position and rotation angle between the cameras, are calculated. In addition, since the scale of distance cannot be determined, the relative position represented by this matrix defines only a direction.
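The correspondence relation mentioned above is the epipolar constraint: for matched points x1 and x2 in the two images, x2ᵀ E x1 = 0, where E = [t]× R is built from the relative translation t and rotation R. The sketch below, a hypothetical illustration assuming calibrated cameras (normalized image coordinates) and identity rotation, verifies this constraint for a simple sideways camera motion.

```python
def skew(t):
    """Cross-product (skew-symmetric) matrix [t]x of a 3-vector t."""
    tx, ty, tz = t
    return [[0.0, -tz,  ty],
            [ tz, 0.0, -tx],
            [-ty,  tx, 0.0]]

def epipolar_residual(E, x1, x2):
    """Residual of the epipolar constraint x2^T E x1 for points in
    normalized homogeneous coordinates; zero for a true correspondence."""
    Ex1 = [sum(E[i][k] * x1[k] for k in range(3)) for i in range(3)]
    return sum(x2[i] * Ex1[i] for i in range(3))

def project(X, cam_center):
    """Pinhole projection with identity rotation and unit focal length:
    normalized image point of 3-D point X seen from cam_center."""
    x, y, z = (X[i] - cam_center[i] for i in range(3))
    return [x / z, y / z, 1.0]
```

Note that scaling t by any positive factor scales E without changing which point pairs satisfy the constraint, which is exactly why the recovered relative position defines only a direction, as stated above.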