1. Field of the Invention
The present invention relates to image pickup apparatuses, image processing methods, and computer programs, and, more particularly, to an image pickup apparatus for performing signal processing upon image pickup data obtained by a solid-state image pickup device, an image processing method, and a computer program.
2. Description of the Related Art
Typical single-plate color solid-state image pickup devices have a plurality of color filters disposed thereon. The color filter allows a specific wavelength component to pass therethrough and reach a pixel included in a solid-state image pickup device. A necessary color component is generated using a block of a plurality of pixels. A color arrangement of such color filters is, for example, as shown in FIG. 1A in which red (R), green (G), and blue (B) are used, or is, for example, as shown in FIG. 1B in which white (Y) corresponding to luminance signals, red (R), green (G), and blue (B) are used. Thus, in such single-plate color solid-state image pickup devices, each pixel obtains information about only a single color component. Accordingly, each pixel generates a necessary color component by performing interpolation using color information obtained from adjacent pixels. This process is called demosaicing.
A configuration of an image pickup apparatus provided with a single-plate color solid-state image pickup device is shown in FIG. 2. A single-plate color solid-state image pickup device 13 receives, via a color filter 12, light incident from an optical lens 11, photoelectrically converts the received light into an electric signal, and outputs the electric signal as an image signal. The image signal is converted into a digital image signal by an A/D converter (not shown). The digital image signal is subjected to clipping, gamma correction, white balance correction, and demosaicing in a camera signal processing unit 14, and is then transmitted to an image compression unit 15. The image compression unit 15 reduces the amount of data included in the image signal, converts the processed image signal into a predetermined image recording format, and then transmits the converted image data to a recording unit 16. The recording unit 16 records the received image data on a recording medium. Image compression is not strictly required here. However, it is desirable that image compression be performed in view of the increasing number of pixels included in image pickup devices and the miniaturization of image pickup apparatuses.
Demosaicing performed on an image captured by a single-plate color solid-state image pickup device will be described with reference to FIG. 3. Single-plate color solid-state image pickup devices are configured to perform image capturing via primary color filters arranged using a color pattern such as the Bayer pattern (shown in FIG. 1A). In such single-plate color solid-state image pickup devices, each pixel obtains only a signal having a specific wavelength, that is, data of a single color component. If a single-plate color solid-state image pickup device in which color filters are arranged in accordance with the Bayer pattern is used, the solid-state image pickup device outputs an image 20 that is a color mosaic image in which each pixel has information about only one of the R, G, and B colors.
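The mapping from pixel position to color component in such a mosaic can be sketched as a small function. This is an illustrative sketch only; the phase chosen below is inferred from the pixel labels used in FIG. 3 (R00, G01, B11, and so on), and the actual layout of FIG. 1A may differ.

```python
def bayer_color(row, col):
    """Color of the filter at (row, col) in the Bayer phase implied by
    FIG. 3: G samples form a checkered pattern, R samples lie on even
    rows, and B samples lie on odd rows."""
    if (row + col) % 2 == 1:
        return "G"
    return "R" if row % 2 == 0 else "B"
```

For example, `bayer_color(0, 0)` gives `"R"` and `bayer_color(1, 1)` gives `"B"`, matching the R00 and B11 labels in FIG. 3.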
A demosaicing unit 21 performs color interpolation for each pixel included in the image 20 so as to obtain data of all color components, that is, information about all of R, G, and B colors at each pixel.
First, a process of generating a G signal which is performed by the demosaicing unit 21 will be described. In the Bayer pattern shown in FIG. 1A, G signals are output from pixels arranged in a checkered pattern. At pixels that output no G signal in the image 20 output from the solid-state image pickup device, a G signal is generated by performing interpolation using G signals output from adjacent pixels. More specifically, a G signal (for example, G11 shown in FIG. 3) is generated using the following equation.

G11 = (1/4)(G01 + G21 + G10 + G12)
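This four-neighbor averaging can be written as a short sketch (the function name and the NumPy array representation of the raw mosaic are assumptions for illustration, not part of the related art):

```python
import numpy as np

def interpolate_g(mosaic, row, col):
    """Average of the four orthogonal neighbors, as in
    G11 = (1/4)(G01 + G21 + G10 + G12).

    `mosaic` is assumed to be a 2-D array of raw sensor values in which
    the four orthogonal neighbors of (row, col) hold G samples."""
    return 0.25 * (mosaic[row - 1, col] + mosaic[row + 1, col]
                   + mosaic[row, col - 1] + mosaic[row, col + 1])
```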
Next, a process of generating R and B signals will be described. In the Bayer pattern shown in FIG. 1A, R signals are output from pixels arranged every other pixel row. Similarly, B signals are output from pixels arranged every other pixel row. For example, in the image 20 shown in FIG. 3 which has been output from the solid-state image pickup device, R signals are output from pixels in the top pixel row, but no B signals are output from pixels therein. In the second pixel row, B signals are output, but no R signals are output.
In each row where pixels that output R or B signals exist, R or B signals are output every two pixels. In the image 20 output from the solid-state image pickup device, if a pixel that outputs no R or B signal exists in a row including pixels that output R or B signals, interpolation of an R signal or a B signal (for example, R01 or B12) is performed for the pixel that outputs no R or B signal using one of the following equations. Consequently, an interpolated pixel value of the pixel is calculated.

R01 = (1/2)(R00 + R02)
B12 = (1/2)(B11 + B13)
Similarly, if a pixel that outputs no R or B signal exists in a column including pixels that output R or B signals, interpolation of an R signal or a B signal (for example, R10 or B21) is performed for the pixel using one of the following equations. Consequently, an interpolated pixel value of the pixel is calculated.

R10 = (1/2)(R00 + R20)
B21 = (1/2)(B11 + B31)
Furthermore, if a pixel that outputs no R or B signal exists in a row or column including pixels that output no R or B signals, interpolation of an R signal or B signal (for example, R11 or B22) is performed for the pixel that outputs no R or B signal using one of the following equations. Consequently, an interpolated pixel value of the pixel is calculated.

R11 = (1/4)(R00 + R02 + R20 + R22)
B22 = (1/4)(B11 + B13 + B31 + B33)
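The three R/B interpolation cases described above can be sketched as follows (function names and the NumPy array representation are assumptions for illustration only; the actual related-art implementations may differ):

```python
import numpy as np

def interpolate_rb_horizontal(mosaic, row, col):
    # Case 1: missing sample in a row that contains the color,
    # e.g. R01 = (1/2)(R00 + R02).
    return 0.5 * (mosaic[row, col - 1] + mosaic[row, col + 1])

def interpolate_rb_vertical(mosaic, row, col):
    # Case 2: missing sample in a column that contains the color,
    # e.g. R10 = (1/2)(R00 + R20).
    return 0.5 * (mosaic[row - 1, col] + mosaic[row + 1, col])

def interpolate_rb_diagonal(mosaic, row, col):
    # Case 3: neither the row nor the column contains the color,
    # e.g. R11 = (1/4)(R00 + R02 + R20 + R22).
    return 0.25 * (mosaic[row - 1, col - 1] + mosaic[row - 1, col + 1]
                   + mosaic[row + 1, col - 1] + mosaic[row + 1, col + 1])
```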
The demosaicing unit 21 performs the above-described color interpolation for all pixels, and outputs an R signal 22r, a G signal 22g, and a B signal 22b. The above-described color interpolation has been described by way of example, and the color interpolation may be performed in different manners using other correlations among color signals.
Currently, improving the quality of images captured by digital still cameras or movie cameras under low luminance conditions is an important issue. In order to capture an image under low luminance conditions, it is generally required that the shutter speed be lowered, a lens with a wide aperture be used, or an external visible light source such as a flash be used.
If the shutter speed is lowered, camera shake occurs or a moving subject becomes blurred. Furthermore, since the aperture ratio of a lens is limited, only a brightness level corresponding to the limited aperture ratio can be obtained. Still furthermore, if an external visible light source is used, the atmosphere of the image capturing location, which has been created using the lighting disposed therein, is undesirably changed.
For example, Japanese Unexamined Patent Application Publication No. 4-88784 and U.S. Pat. No. 5,323,233 disclose a signal processing method of obtaining a high-resolution image using an image pickup device provided with color filters arranged in accordance with the color pattern described previously with reference to FIG. 1B, that is, the pattern in which white (Y) pixels corresponding to luminance signals are used together with red (R), green (G), and blue (B) pixels. Japanese Unexamined Patent Application Publication No. 4-88784 describes a signal processing method of achieving a high resolution using the color filter array shown in FIG. 1B, in which white pixels are arranged in a checkered pattern.
That is, according to the color filter array shown in FIG. 1B, a signal can be obtained which has a value larger than a signal value obtained when the color filter array shown in FIG. 1A, in which green (G) pixels are arranged in a checkered pattern, is used, because the white (Y) pixels arranged in a checkered pattern have sensitivity over the entire visible light range. Consequently, an image having a good signal-to-noise ratio can be obtained.
However, if the same exposure period is set for light-receiving devices corresponding to the white (Y), red (R), green (G), and blue (B) pixels, the light-receiving devices corresponding to the white (Y) pixels receive a larger amount of light compared with the light-receiving devices corresponding to the red (R), green (G), and blue (B) pixels. In such a color filter array, if the amount of light is controlled so that one type of light-receiving device can receive an appropriate amount of light, the other types of light-receiving devices cannot receive an appropriate amount of light.
Furthermore, if an exposure period suitable for one type of light-receiving device is set for the other types of light-receiving devices, the other types of light-receiving devices cannot receive an appropriate amount of light. For example, if an exposure period is set so that one type of light-receiving device does not become saturated, the other types of light-receiving devices cannot obtain sufficient signal charges. This leads to a poor signal-to-noise ratio. On the other hand, if an exposure period is set so that all types of light-receiving devices can obtain sufficient signal charges, the one type of light-receiving device becomes saturated.