Conventionally, in an image taken by a video camera, a digital still camera, a film (silver-halide) camera or the like, distortion has been generated owing to the influence of the distortion aberration characteristic of the imaging lens. Such distortion is not conspicuous with a high-precision, high-performance lens. However, in case of using a low-priced lens or an optical zoom lens, it is difficult to avoid the influence of image distortion completely.
Accordingly, an image processing apparatus correcting the distortion by signal processing has recently been proposed. FIG. 33 shows the configuration of a conventional image processing apparatus 100. As shown in FIG. 33, the conventional image processing apparatus 100 includes a lens 200, an imaging device 300, a data converting unit 400, a signal processing unit 500, an image memory 600, a control microcomputer 700, a synchronizing signal generating unit 800, a correction data table 1010, a recording unit 1100, a reproducing unit 1200 and a displaying system processing unit 1300.
Now, referring to the flow chart of FIG. 34, an outline of the operation of the image processing apparatus 100 is described. First, at Step S1, an analog image signal of a subject 101 is input through the lens 200 and the imaging device 300. Then, at Step S2, the data converting unit 400 converts the analog image signal into a digital image signal to generate an image 102.
Next, at Step S3, the signal processing unit 500 performs a correction operation on the distorted image 102 by using distortion correction vectors (hereinafter simply referred to as “correction vectors”) stored in the correction data table 1010. Then, at Step S4, the control microcomputer 700 determines whether the input of images has ended. When the control microcomputer 700 determines that the input has not ended, the operation returns to Step S1.
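The loop of Steps S1 to S4 can be sketched in software as follows. This is a minimal one-dimensional illustration, not the apparatus itself; the function names and the representation of a correction vector as an integer pixel offset are assumptions made for this sketch only.

```python
def process_frames(analog_frames, correction_table):
    """Minimal sketch of the S1-S4 loop.  correction_table maps an
    output pixel index to an integer offset (a 1-D stand-in for a
    correction vector); all names here are hypothetical."""
    corrected = []
    for analog in analog_frames:                      # S1: image input
        digital = [round(v) for v in analog]          # S2: A/D conversion
        image = []
        for i in range(len(digital)):                 # S3: geometric correction
            src = i + correction_table.get(i, 0)      # pixel the vector points at
            src = min(max(src, 0), len(digital) - 1)  # clamp to the frame
            image.append(digital[src])
        corrected.append(image)
    return corrected                                  # S4: stop when input ends
```

In the real apparatus the correction vectors are two-dimensional and have fractional components, which is what makes the interpolation described below necessary.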
The foregoing is the outline of the operation of the conventional image processing apparatus 100 shown in FIG. 33, and the contents of the operation will be described in detail in the following.
The lens 200 condenses the reflected light from the subject 101 to form the image of the subject 101 on the imaging device 300. The imaging device 300 is formed of a CCD, a CMOS sensor or the like, and captures the projected image to generate an analog image signal. Moreover, the data converting unit 400 converts the analog signal supplied from the imaging device 300 into a digital image signal to generate the image 102. On the other hand, the control microcomputer 700 issues commands instructing predetermined operations in accordance with input from an external user interface.
Moreover, the signal processing unit 500 stores the digital image signal generated by the data converting unit 400 into the image memory 600 in accordance with the command supplied from the control microcomputer 700. Then, the signal processing unit 500 reads, from the correction data table 1010, the correction vectors corresponding to all pixels, which have been recorded in the table in advance. After the signal processing unit 500 has obtained the necessary image signals from the image memory 600 according to the correction information, it corrects the distortion of the image 102 output from the data converting unit 400 by applying geometric correction to the image signals by means of a two-dimensional interpolation system.
Now, the image signals generated by the signal processing unit 500 are supplied to the displaying system processing unit 1300 and the image is displayed on a monitor, or are supplied to the recording unit 1100 and recorded in an external medium 1400 such as a tape, a disc or a memory. Moreover, the image signals recorded in the medium 1400 are reproduced by the reproducing unit 1200. The reproduced signal is supplied to the displaying system processing unit 1300, and the reproduced image is displayed on the monitor.
Incidentally, the synchronizing signal generating unit 800 generates an internal synchronizing signal according to a clock signal CLK supplied from the outside and supplies the generated internal synchronizing signal to the imaging device 300, the data converting unit 400 and the signal processing unit 500.
FIG. 35 is a block diagram showing the configuration of the signal processing unit 500 shown in FIG. 33. As shown in FIG. 35, the signal processing unit 500 includes a timing control unit 510, an interpolation phase/input data coordinate calculating unit 520, a data obtaining unit 530, an interpolation coefficient generating unit 540, a data interpolation calculating unit 550, an output data buffer 560 and a data writing unit 570.
Hereupon, the data writing unit 570 supplies the digital image signal supplied from the data converting unit 400 to the image memory 600 together with a writing control signal Sw, and causes the image memory 600 to store the digital image signal.
Moreover, the timing control unit 510 generates a control timing signal St according to the internal synchronizing signal supplied from the synchronizing signal generating unit 800. The interpolation phase/input data coordinate calculating unit 520 calculates the coordinates of an output image according to the supplied control timing signal St and supplies a correction vector request signal Sa requesting a correction vector of the obtained coordinates to the correction data table 1010.
The correction data table 1010 obtains, from its built-in table, a correction vector in accordance with the correction vector request signal Sa and supplies the obtained correction vector to the data obtaining unit 530 and the interpolation coefficient generating unit 540. The data obtaining unit 530 obtains, from the image memory 600, interpolation data corresponding to the integer component of the correction vector output from the correction data table 1010 by supplying a read control signal Sr to the image memory 600. Incidentally, the data obtaining unit 530 supplies the obtained interpolation data to the data interpolation calculating unit 550.
On the other hand, the interpolation coefficient generating unit 540 generates an interpolation coefficient according to the decimal component of the correction vector supplied from the correction data table 1010 and supplies the generated interpolation coefficient to the data interpolation calculating unit 550. Then, the data interpolation calculating unit 550 executes an interpolation operation in accordance with the interpolation data supplied from the data obtaining unit 530 and the interpolation coefficient supplied from the interpolation coefficient generating unit 540. Incidentally, a two-dimensional interpolation operation is executed as the interpolation operation.
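The division of each correction vector into an integer component (used by the data obtaining unit 530 to address the source pixels in the image memory 600) and a decimal component (used by the interpolation coefficient generating unit 540) can be sketched as follows; the function name and the floating-point representation of the vector are assumptions made for illustration.

```python
import math

def split_correction_vector(vx, vy):
    """Split a correction vector (vx, vy) into its integer component,
    which addresses the source pixels in image memory, and its decimal
    component, from which the interpolation coefficients are derived."""
    ix, iy = math.floor(vx), math.floor(vy)
    return (ix, iy), (vx - ix, vy - iy)
```

For example, splitting the vector (2.25, 3.5) yields the integer part (2, 3) and the decimal part (0.25, 0.5); flooring (rather than truncating) keeps the decimal part in [0, 1) even for negative vectors.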
In the following, FIGS. 36A and 36B are referred to while image conversion by means of two-dimensional interpolation is described. FIG. 36A shows images before and after the two-dimensional interpolation, and FIG. 36B shows an enlarged view of a part of FIG. 36A.
Now, for example, when an arrow connecting a point a1 to a point a4 shown in FIG. 36A is an output image, it is supposed that the points on the image 102 corresponding to the points a1 to a4 constituting the output image are points A1 to A4. Consequently, FIG. 36A shows a case where an original image composed of an arrow connecting the point A1 to the point A4 is converted to the output image connecting the point a1 to the point a4 by the two-dimensional interpolation.
In this case, when the image of each point of the output image is determined by using two pieces of image data in each of the x and y directions (2×2), the image data at the point a1 is determined by using, for example, the four grid points K00, K01, K10 and K11 enclosing the point A1. Incidentally, the image data of the points a2 to a4 are determined by executing similar operations for the points A2 to A4. Hereupon, the four grid points K00, K01, K10 and K11 are determined according to the correction coordinates output from the correction data table 1010.
Moreover, as shown in FIG. 36B, when it is supposed that both the distances between the grid point K00 and the grid point K10, and between the grid point K10 and the grid point K11 are 1, the positions of the point A1 in the x direction and the y direction are severally specified by decimal parameters Px and Py. In this case, the weighting (interpolation coefficient) Cn (n=1 to 4) of each of the image data at grid points K00, K01, K10 and K11 used for the calculation of the image data at the point a1 is determined on the basis of the decimal components, i.e. the decimal parameters Px and Py, of the correction vector supplied from the correction data table 1010.
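With the grid spacing normalized to 1, the weights Cn follow directly from the decimal parameters Px and Py. A minimal sketch of the standard bilinear form is shown below; the assignment of K10 to the x offset and K01 to the y offset is an assumption, since the figure fixes only the unit spacing.

```python
def bilinear_coefficients(px, py):
    """Interpolation coefficients Cn for the four grid points enclosing
    the source point, given the decimal parameters px and py in [0, 1).
    Assumed ordering: K00 at the origin, K10 offset in x, K01 offset
    in y, K11 offset in both."""
    return ((1 - px) * (1 - py),  # weight for K00
            px * (1 - py),        # weight for K10
            (1 - px) * py,        # weight for K01
            px * py)              # weight for K11

def interpolate_point(k00, k10, k01, k11, px, py):
    """Image data at the output point as the weighted sum of the four
    grid-point values."""
    c = bilinear_coefficients(px, py)
    return c[0] * k00 + c[1] * k10 + c[2] * k01 + c[3] * k11
```

Note that the four coefficients always sum to 1, so a flat region of the image passes through the interpolation unchanged.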
Moreover, the data obtained as a result of the interpolation operation of the data interpolation calculating unit 550 is held in the output data buffer 560, and is output to the displaying system processing unit 1300 or the recording unit 1100 at predetermined timing.
Hereupon, the conventional data interpolation calculating unit 550 is configured as shown in FIG. 37. Incidentally, FIG. 37 shows a configuration for the case where the image of each point of an output image is determined by using 16 pieces of image data in all, with four pieces of image data arranged in each of the x and y directions (4×4).
As shown in FIG. 37, the conventional data interpolation calculating unit 550 includes four line memories 900, 16 registers 901 in all, each four of which are serially-connected to the output node of each of the line memories 900, 16 multiplication circuits 902 each multiplying each image data output from each of the registers 901 by a corresponding interpolation coefficient CHn (n=00 to 33), an adding circuit 904 for adding the data obtained by the 16 multiplication circuits 902, and a dividing circuit 905 for performing the division of the data obtained by the adding circuit 904.
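The multiply-add-divide datapath of FIG. 37 amounts to a 16-tap two-dimensional weighted sum, which can be sketched in software as follows. Normalizing by the sum of the coefficients is an assumption about the dividing circuit 905, whose exact divisor the description above does not specify.

```python
def interpolate_4x4(window, coeffs):
    """16-tap 2-D interpolation: multiply each of the 4x4 source pixels
    by its coefficient CHn (the 16 multiplication circuits 902), sum
    the products (the adding circuit 904), and divide by the coefficient
    total (assumed role of the dividing circuit 905) to normalize."""
    acc = sum(window[y][x] * coeffs[y][x]
              for y in range(4) for x in range(4))
    return acc / sum(sum(row) for row in coeffs)
```

In hardware the four line memories 900 and the four registers per line supply exactly this 4×4 window, one column of new pixels per clock.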
According to the conventional image processing apparatus described above, the distortion of an image can be corrected in real time; however, there is a problem in that the scale of the circuit becomes large and the cost of the apparatus increases because it is necessary to provide correction vectors corresponding to all pixels.
Furthermore, in the case where the position of the lens 200 is changed or the lens is exchanged, it is necessary to update the correction vectors according to the change of the distortion aberration characteristic of the lens. Consequently, an expensive large-capacity correction data table 1010 becomes necessary.
Moreover, the updating of the correction data table 1010 is executed by the control microcomputer 700 on the basis of instructions from the user interface. However, there is another problem in that real time processing by the control microcomputer 700 becomes difficult because a large communication capacity is required between the control microcomputer 700 and the correction data table 1010.
Incidentally, there is a method of computing the correction vectors sequentially in place of providing the correction data table 1010, but with such a method real time processing without the so-called frame delay is difficult. Consequently, there is a problem in that large-scale hardware becomes necessary to realize real time processing, which increases the cost.
Moreover, as described above, in the two-dimensional interpolation, the image data at a plurality of points on the two-dimensional surface on which the image is formed is used for correcting the image data of one point. However, since image data at many points becomes necessary for obtaining a high quality image, there is a problem in that the frequency of accessing the image memory 600 becomes high, making it impossible to accelerate the operation.
Moreover, in case of executing two-dimensional interpolation, the port width of the image memory 600 must provide a bandwidth several times as large as the output rate. That is to say, for example, in the case where the image data at one pixel is generated from the image data at four pixels in two-dimensional interpolation, the port width needs to provide a bandwidth four times as large as that for one pixel.
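This bandwidth requirement can be made concrete with a small helper function (the function itself and the worst-case assumption of no pixel reuse between neighbouring output pixels are ours, not part of the apparatus):

```python
def required_bandwidth(taps_x, taps_y, pixel_rate):
    """Worst-case read bandwidth (in pixels per second) the image
    memory port must sustain: taps_x * taps_y source pixels are
    fetched per output pixel, assuming no reuse of fetched pixels
    between neighbouring output pixels."""
    return taps_x * taps_y * pixel_rate
```

For the 2×2 case of the text, required_bandwidth(2, 2, r) gives 4·r, i.e. four times the output rate; the 4×4 filter of FIG. 37 would correspondingly need sixteen times the output rate.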
As described above, because such a port width condition must be satisfied in case of executing the two-dimensional interpolation, it is very difficult to use a high-performance filter with a large number of taps (the number of “taps” means the number of pieces of data used in the direction that is the object of image processing), so that there is a problem in that it is difficult to obtain a high quality image.
The present invention was made to solve the above-mentioned problems, and an object of the present invention is to provide an image processing apparatus, an image processing system and an image processing method which correct the distortion of an image at low cost and generate a high quality image in real time.