1. Field of the Invention
The present invention relates to an image pickup apparatus for picking up an object image.
2. Description of Related Art
Conventionally, a digital still camera processes an electric signal photoelectrically converted by image pickup means and records the picked-up image information in, for example, an external recording medium (such as a memory card, a hard disk or the like) as electric (magnetic) information. In this processing of the electric signal, image processing is performed by treating the image as digital information, which makes it possible to reproduce the image as an electric signal after photographing, unlike a conventional film-based camera, which executes photographing by printing an image on film.
FIG. 5 shows a block diagram of principal parts of a conventional digital still camera. In the attached drawings, it is supposed that components (blocks and the like) designated by the same reference numerals in a certain figure as those in other figures have the same functions as those in the other figures.
FIG. 5 does not show components, such as a user interface, other than those related to the digital processing of photographed image information. Also, FIG. 5 does not show the control of the camera mechanism elements, which does not relate to the scope of the present invention. As an interface (I/F) with a camera control unit, a camera controller I/F 118 is shown in FIG. 5. Moreover, in FIG. 5, a reference numeral 117 denotes a coprocessor for performing calculations of automatic focusing (AF) and automatic exposure (AE). The coprocessor 117 is provided to lighten the load of a central processing unit (CPU) 116. The camera controller I/F 118 and the coprocessor 117 are shown to indicate the existence of the camera mechanism elements linked with the principal parts in the block diagram.
The conventional digital still camera converts optical information into a charge quantity by means of image pickup means such as a charge coupled device (CCD), a complementary metal-oxide semiconductor (CMOS) sensor or the like, and stores the converted electric charges in a capacitive element. Thereby, the conventional digital still camera picks up image information as an electric signal. In FIG. 5, a reference numeral 101 denotes an imager apparatus corresponding to the CCD, the CMOS sensor or the like. The imager apparatus 101 is provided with an array of sensors, each constituting a pixel. The information of each pixel is transferred to a processing unit 102 at the next stage in synchronization with a timing signal generated by a timing generator 103.
The processing unit 102 is an analog signal processing unit including correlated double sampling (CDS), automatic gain control (AGC), an analog-digital converter (ADC) and the like to digitize the image information. In the stages after the ADC, the information at every pixel is treated as digital data. The digital data are transferred to the next stage in synchronization with, for example, a clock generated by the timing generator 103.
In FIG. 5, a reference numeral 104 denotes a pixel information correction unit. The pixel information correction unit 104 corrects defects and characteristics included in a signal from the sensor before performing the image processing of image information (a picture). That is, the pixel information correction unit 104 performs, for example, the removal of dark current components, the correction of shading, the correction of dot scratches, and the like. A digital signal from the sensor, subjected to the correction processing, is then subjected to the image processing of an image processing unit 107 in FIG. 5 as an image signal.
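The corrections performed by a unit such as the pixel information correction unit 104 can be illustrated by the following sketch. The function name, the use of a dark frame, a per-pixel shading gain map and a defect map, and the neighbor-averaging repair are all illustrative assumptions, not the actual implementation of any particular apparatus.

```python
import numpy as np

def correct_pixels(raw, dark_frame, shading_gain, defect_map):
    """Sketch of sensor-signal corrections performed before image
    processing (assumed names and steps)."""
    # Remove dark current components by subtracting a dark frame.
    corrected = raw.astype(np.float64) - dark_frame
    # Correct shading (sensitivity falloff) with a per-pixel gain map.
    corrected *= shading_gain
    # Repair dot scratches (defective pixels) by averaging neighbors.
    for y, x in np.argwhere(defect_map):
        y0, y1 = max(y - 1, 0), min(y + 2, raw.shape[0])
        x0, x1 = max(x - 1, 0), min(x + 2, raw.shape[1])
        patch = corrected[y0:y1, x0:x1]
        corrected[y, x] = (patch.sum() - corrected[y, x]) / (patch.size - 1)
    return np.clip(corrected, 0, None)
```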
In most cases, the image processing unit 107 is provided as hardware in consideration of processing speed. If the image processing unit 107 is implemented as hardware, there are cases where an appropriate buffer is necessary from the viewpoints of filtering and of holding reference information. The configuration of FIG. 5 is provided with an image buffer 106 for temporarily storing the signals from the sensors when the throughput of the stages after the image processing unit 107 is lowered, and a traffic controller 105 for controlling the transfer timing of the signals from the sensors.
The image buffer 106 is effective, for example, in the case where the image information captured during continuous photographing is stored continuously. In a configuration in which the capturing of images and the sequence of image processing are separated from each other, there are cases where the traffic controller 105 and the image buffer 106 are positioned before the pixel information correction unit 104 shown in FIG. 5.
If the image pickup means is a sensor using a red (R), green (G) and blue (B) Bayer arrangement filter, the image processing unit 107 performs image processing as follows. That is, the image processing unit 107 generates the R, G and B signals at each pixel position by an interpolation method, sets the white balance, performs a matrix operation and gamma processing (each being a color correction corresponding to an output medium), and suppresses false colors generated in the stages up to the gamma processing. In setting the white balance, for example, the sensor is divided into several areas, and the integrated values of the Bayer information of the respective areas are compared with one another to estimate a white color position on the basis of the ratios of the integrated values. A reference numeral 108 in FIG. 5 denotes an integrator for integrating signal values in each of the areas. A reference numeral 109 in FIG. 5 denotes a coefficient value calculation algorithm for determining a white balance operation coefficient on the basis of the integrated values of the integrator 108. The coefficients are determined by calculation or by selecting a value from a table.
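The area-integration white balance described above can be sketched as follows. The block count, the neutrality tolerance, and the fallback to whole-frame integration are illustrative assumptions; an actual coefficient value calculation algorithm would use its own tuned criteria.

```python
import numpy as np

def white_balance_gains(r, g, b, blocks=4, tol=0.25):
    """Minimal sketch: split the frame into blocks x blocks areas,
    integrate each channel per area, treat areas whose R/G and B/G
    ratios sit near 1 as white candidates, and derive gains that
    bring their R and B up to G (assumed thresholds)."""
    h, w = g.shape
    bh, bw = h // blocks, w // blocks
    cand = []
    for by in range(blocks):
        for bx in range(blocks):
            sl = (slice(by * bh, by * bh + bh), slice(bx * bw, bx * bw + bw))
            rs, gs, bs = float(r[sl].sum()), float(g[sl].sum()), float(b[sl].sum())
            if gs > 0 and abs(rs / gs - 1) < tol and abs(bs / gs - 1) < tol:
                cand.append((rs, gs, bs))
    if not cand:  # no near-neutral area found: integrate the whole frame
        cand = [(float(r.sum()), float(g.sum()), float(b.sum()))]
    rs = sum(c[0] for c in cand)
    gs = sum(c[1] for c in cand)
    bs = sum(c[2] for c in cand)
    return gs / rs, 1.0, gs / bs  # (R gain, G gain, B gain)
```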
The matrix operation, one of the color corrections, performs matrix operations on the R, G and B signals to change their mapping in a color space depending on the signal values of the respective colors. A white balance coefficient and a matrix coefficient, each being one of the color correction coefficients, affect the characteristics of the whole image. The conventional digital still camera therefore does not change the white balance coefficient and the matrix coefficient dynamically at every pixel.
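The matrix operation can be sketched as a fixed 3x3 matrix applied to every pixel's (R, G, B) vector. The example matrix below is purely illustrative, not a camera's actual color tuning; its rows sum to 1 so that neutral gray is preserved.

```python
import numpy as np

def color_correct(rgb, matrix):
    """Remap each pixel's (R, G, B) vector in color space by a fixed
    3x3 color correction matrix, applied uniformly over the frame."""
    h, w, _ = rgb.shape
    return (rgb.reshape(-1, 3) @ matrix.T).reshape(h, w, 3)

# Illustrative mild saturation-boosting matrix (rows sum to 1).
ccm = np.array([[ 1.2, -0.1, -0.1],
                [-0.1,  1.2, -0.1],
                [-0.1, -0.1,  1.2]])
```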
The example of the digital still camera of FIG. 5 is configured to compress and record image information subjected to the image processing. In FIG. 5, a reference numeral 112 denotes a compression/decompression unit. The transfer sequence of data up to the compression/decompression unit 112 is a raster scan over the whole screen. However, there is a case where the compression/decompression unit 112 requires an input data sequence other than the raster scan. In the conventional digital still camera, a representative example of such a data sequence is the Joint Photographic Experts Group (JPEG) compression.
In the JPEG compression, data is input on a block basis called a minimum coded unit (MCU). To realize this data input, the data processing sequence should be changed by a raster-block conversion. In FIG. 5, a reference numeral 110 denotes a scan conversion control unit for realizing the raster-block conversion by temporarily storing data in a temporary buffer 111 in raster order and by outputting the stored data to the compression/decompression unit 112 at the next stage when the data quantity stored in the temporary buffer 111 becomes sufficient to be read on a block basis.
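The raster-block conversion can be sketched as follows: raster lines are buffered until enough rows are available for one band of blocks, then the band is emitted as square tiles. The buffer handling here is deliberately simplified compared with a hardware scan conversion unit.

```python
def raster_to_blocks(lines, width, block=8):
    """Buffer raster lines until `block` rows are available, then
    yield block x block tiles in left-to-right order (sketch of a
    raster-block conversion for MCU-ordered input)."""
    buf = []
    for line in lines:
        buf.append(line)
        if len(buf) == block:  # enough rows for one band of blocks
            for x in range(0, width, block):
                yield [row[x:x + block] for row in buf]
            buf = []
```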
Moreover, the compressed data stored in a recording medium 122 is decompressed by the compression/decompression unit 112 and stored temporarily in the temporary buffer 111. After that, the data is read from the temporary buffer 111 and subjected to the reduction processing of a resampling unit 113 for generating a thumbnail. Then, the reduced data is stored in a work memory 114. Thus, it is possible to display the data on a liquid crystal display (LCD) monitor 124 through the CPU 116 and a monitor driver 123.
The CPU 116 in FIG. 5 performs the condition setting (not shown) of each block on the basis of control information stored in a program memory 115. The CPU 116 also performs the transfer of display data to the monitor driver 123, the control of data transfer to an I/F controller 119, and the control of data transfer to a card memory controller 121. The I/F controller 119 drives an I/F driver 120 for driving buses physically. Moreover, the card memory controller 121 executes data writing into the recording medium 122.
Recently, image pickup means based on a system other than that of the conventional image pickup means mentioned above has been realized: for example, a multi-layer photodiode type image sensor.
A conventional multi-layer photodiode type image sensor is disclosed in U.S. Pat. No. 5,965,875 in detail. U.S. Pat. No. 5,965,875 discloses the principle of an image sensor of a three-layer photodiode structure, in which photodiodes are formed in a triple well structure, and pixel circuits. FIG. 6 shows a schematic diagram of a three-color pixel sensor using the three-layer structure.
According to U.S. Pat. No. 5,965,875, the photodiodes are formed by diffusion from the surface of a p-type silicon substrate: an n-type layer, a p-type layer and an n-type layer are formed at successively greater depths. Thereby, three pn junction diodes are formed in the depth direction of the silicon substrate. The longer the wavelength of the light entering the diodes from the surface side, the deeper the light penetrates into the layers.
In FIG. 6, a reference numeral 501 denotes the silicon substrate forming a p-type layer. The area denoted by a reference numeral 502 in FIG. 6 is a deep n-type well area formed in the substrate 501. The depth of the deepest position of this n-type layer is in a range of about 1.5 μm to 3.0 μm from the surface; in FIG. 6, it is set to about 2 μm. Red light is absorbed around this junction position and is measured as an electron quantity (indicated by a reference numeral 510 in FIG. 6) between the substrate 501 and the well 502.
Since the attenuation coefficient for each incident wavelength is specific to silicon, it is possible to detect optical signals in different wavebands by detecting currents separately from the three-layer diodes mentioned above. The depths of the pn junctions of the three-layer photodiodes are set to cover the waveband of visible rays.
Similarly, a reference numeral 504 in FIG. 6 denotes a p-type well formed on the n-type well 502. The depth of the deepest position of the p-type well 504 from the surface is 0.6 μm in FIG. 6. Green light is absorbed at the junction between the p-type well 504 and the n-type well 502 and is measured as indicated by a reference numeral 512 in FIG. 6. A reference numeral 506 in FIG. 6 denotes a shallow n-type well area, the deepest position of which is 0.2 μm from the surface in FIG. 6. Blue light is absorbed at the junction between the n-type well area 506 and the p-type well 504 and is measured as indicated by a reference numeral 514 in FIG. 6. By performing operation processing of the three color signals, it is possible to separate them and to reproduce an image.
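The depth-dependent absorption can be illustrated with the Beer-Lambert law: the fraction of incident light absorbed between two depths is the difference of the exponential attenuation at those depths. The absorption coefficients below are rough illustrative figures for silicon, not measured device values; the layer boundaries follow the depths of FIG. 6.

```python
import math

# Rough illustrative absorption coefficients of silicon (1/um).
ALPHA = {"blue_450nm": 2.4, "green_545nm": 0.7, "red_630nm": 0.33}

def fraction_absorbed(alpha, d_top, d_bottom):
    """Beer-Lambert: fraction of incident light absorbed between
    depths d_top and d_bottom (um) below the surface."""
    return math.exp(-alpha * d_top) - math.exp(-alpha * d_bottom)

# Layer boundaries taken from the three-layer structure of FIG. 6 (um).
layers = {"top (0-0.2)": (0.0, 0.2),
          "middle (0.2-0.6)": (0.2, 0.6),
          "bottom (0.6-2.0)": (0.6, 2.0)}
for color, a in ALPHA.items():
    shares = {n: fraction_absorbed(a, d0, d1) for n, (d0, d1) in layers.items()}
    print(color, {k: round(v, 2) for k, v in shares.items()})
```

Running the loop shows that every layer absorbs some fraction of every color, which is why the three obtained signals overlap rather than separating cleanly into R, G and B.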
Since it is possible to pick up the three color signals of R, G and B at the same pixel position in the three-layer photodiode type image sensor described above, there is no need to perform any color interpolation operation such as the aforesaid interpolation, and no false color owing to the interpolation operation is generated. These advantages of the three-layer photodiode type image sensor have attracted attention. However, although the three-layer photodiode type image sensor can detect light in different wavebands by means of the different depths of the three-layer photodiodes, the three obtained signals overlap one another to a relatively large degree.
FIG. 7 shows the distribution of wavelengths of the light that the sensor absorbs. According to the distribution, even if the peak sensitivity of the middle-layer photodiode is set near the G color (545 nm), this photodiode also photoelectrically converts an optical signal near the R color (630 nm) and an optical signal near the B color (450 nm) at a rate of several tens of percent or more. If a signal including various colors at such a large rate is processed, the color reproducibility of the signal deteriorates. The signal also has the defect of being easily affected by noise.
Moreover, the gain of each photodiode, i.e. the voltage change of the diode generated per unit charge quantity, is inversely proportional to the pn junction capacitance C of the photodiode. Since the areas of the three diodes inevitably differ from one another and the pn junction capacitance per unit area is determined by the density of each diffusion layer, it is difficult to make the capacitances of the three photodiodes agree with one another. Consequently, the three optical signals read from the three photodiodes have gains different from one another, which makes it difficult to handle the signals in signal operations. Moreover, the signal processing is further complicated owing to the large degree of color mixture among the signals as described above.
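The inverse relation between gain and capacitance follows directly from V = Q / C, as the following sketch shows. The capacitance values are illustrative assumptions, chosen only to demonstrate that halving C doubles the voltage swing for the same accumulated charge.

```python
def diode_voltage_swing(charge_coulombs, capacitance_farads):
    """Conversion gain of a photodiode: the voltage change produced by
    an accumulated charge is V = Q / C, so the gain is inversely
    proportional to the pn junction capacitance."""
    return charge_coulombs / capacitance_farads

# Illustrative (assumed) junction capacitances for three stacked diodes.
q = 1000 * 1.602e-19  # charge of 1000 electrons, in coulombs
for name, c in [("top", 2e-15), ("middle", 4e-15), ("bottom", 8e-15)]:
    print(name, diode_voltage_swing(q, c))  # halving C doubles the swing
```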
Among the layers of the three photodiodes, two photodiodes adjacent to each other in the vertical direction are capacitively coupled to each other through a pn junction. As the charges generated by photoelectric conversion accumulate in a photodiode, the capacitance of the photodiode changes. Consequently, the electric potential of the photodiode at a certain layer is also influenced by the charge quantity stored in a photodiode at another layer. Hence, there is a problem in that the linearity of the photodiode deteriorates or the linearity changes according to color.
In addition, when the photodiode at the uppermost layer is saturated, excess electrons in the photodiode get over the potential barrier composed of the p-type layer at the second layer from the top and flow into the n-type area of the photodiode at the lowermost layer. Consequently, if an image having an intense short-wavelength optical component is picked up, a signal which should not originally exist is detected on the long wavelength side of the pixel, thereby deteriorating the color reproducibility of the pixel.
As described above, the multi-layer photodiode type image sensor is a sensor capable of detecting the wavelength component of each of the R, G and B colors at the same pixel position. However, the color components are not necessarily extracted separately from one another at the time of output. Moreover, there is the possibility of a component being extracted as one different from the original color signal.
That is, the state of color mixture among the picked-up R, G and B signals, the influence of the variations of charge quantities owing to capacitive coupling, and the influence of the outflows and inflows of charges over potential barriers all differ from point to point in a screen according to the object to be photographed and the state of the ambient light. Consequently, it is difficult to reproduce a high quality image from the outputs of a sensor having the three-layer photodiode structure by conventional image processing performed using coefficients determined on the frame basis.
However, as described above in connection with the block diagram of FIG. 5, which exemplifies the principal parts of the configuration of a conventional apparatus, the setting of the coefficients of the image processing unit 107 has conventionally been fixed within an image in a digital image processing apparatus that processes and records, as a digital signal, optical information generated by image pickup means (such as a CCD, a CMOS sensor or the like).