Heretofore, research has been conducted on producing three-dimensional images. As one method of producing a three-dimensional image, a method of displaying two images of an object photographed from different directions side by side, and showing these two images to a viewer's left eye and right eye, respectively, is known. The pair of images used in this method is referred to as a stereoscopic image.
The two images contained in a stereoscopic image are viewed by the viewer's left eye and right eye separately. Therefore, to produce a three-dimensional image of good quality, the two images in which the object is captured are preferably photographed under the same conditions as when the viewer normally sees things. However, there are cases where, for example, the camera module for photographing the left-eye image and the camera module for photographing the right-eye image are displaced from their proper positions, within the range of mounting tolerances for camera modules. As a result, the object captured in the two images may be displaced from its proper position in the vertical direction or in the horizontal direction, or the object captured in one image may be rotated relative to the object captured in the other image. In such cases, to generate a stereoscopic image from which a good three-dimensional image can be produced, a calibration process is executed to determine a set of correction parameters that defines a projective transformation for correcting the position of the object captured in at least one of the images.
Heretofore, in the calibration process, the user has had to adjust the position of the object in the left-eye image or the position of the object in the right-eye image while checking the images with his/her own eyes, which is very inconvenient for the user. In particular, if the user is a beginner, it is difficult to perform the calibration process adequately.
Meanwhile, a technique of automatically adjusting the positions of the objects in two images based on the objects themselves has been proposed. This technique finds a plurality of pairs of feature points, one in each image, that correspond to the same points on the object, and determines, based on these feature points, a projective transformation matrix for aligning the object in one image with the object in the other image. Then, according to the projective transformation matrix, the positions of the pixels in one image are transformed (see, for example, Mikio Takagi, Haruhisa Shimoda, Handbook of Image Analysis, University of Tokyo Press, 1991, pp. 584-585). However, if the pairs of feature points are unevenly distributed over an image, the deviation between the objects in the two images in areas containing no feature points is not reflected in the projective transformation matrix, and therefore the object may not be positioned accurately over the entire image. Accordingly, techniques of making adjustments such that the pairs of feature points are not unevenly distributed over an image have been proposed (see, for example, Japanese Laid-open Patent Publication No. 2010-14450 and Japanese Laid-open Patent Publication No. 2004-235934). For example, Japanese Laid-open Patent Publication No. 2010-14450 discloses a technique of detecting additional feature points in areas where the density of the feature points extracted from each image is low, and using these detected feature points to position the two images.
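The estimation of a projective transformation matrix from pairs of corresponding feature points described above may be sketched as follows. This sketch assumes a standard direct linear transform (DLT) formulation solved by singular value decomposition; the function names and the NumPy-based implementation are illustrative and are not taken from the cited references, which may use a different estimation method.

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Estimate the 3x3 projective transformation matrix (homography)
    mapping src_pts to dst_pts, via the direct linear transform.
    src_pts, dst_pts: (N, 2) arrays of corresponding feature points, N >= 4,
    not all collinear."""
    rows = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        # Each correspondence contributes two linear constraints on the
        # nine entries of the homography.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    # The solution is the right singular vector belonging to the
    # smallest singular value of A.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so the bottom-right entry is 1

def apply_homography(H, pts):
    """Transform (N, 2) pixel positions by the homography H."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]  # perspective division
```

In an actual calibration process, `src_pts` and `dst_pts` would be the detected feature-point pairs in the two images, and `apply_homography` (or an equivalent image-warping step) would reposition the pixels of one image. As the passage above notes, if the point pairs cluster in one region, the estimated matrix is constrained only there, which is precisely the maldistribution problem the cited publications address.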
Also, Japanese Laid-open Patent Publication No. 2004-235934 proposes a technique of extracting, as feature points, the center position of a moving sphere in corresponding frames of image data from a plurality of cameras that have photographed the sphere from different viewpoint directions, and calculating calibration parameters based on the extracted feature points. With this technique, when the feature points are unevenly distributed, the sphere is moved, images are acquired again, and feature points are extracted anew. The technique eventually utilizes the calibration parameters calculated when it is determined that the feature points are no longer unevenly distributed.