1. Field of the Invention
The present invention relates to a color-mixture-ratio calculation device, a color-mixture-ratio calculation method, and an imaging device, and particularly relates to a technology of calculating ratios of color mixture with high accuracy.
2. Description of the Related Art
With the progress in miniaturization of the pixels of imaging elements in digital cameras, color mixture has become a problem. In a complementary metal oxide semiconductor (CMOS) imaging element, color mixture occurs when light incident on adjacent pixels leaks below the silicon surface of a given pixel.
As shown in FIG. 26, the color mixture depends on the incidence angle. Thus, in order to precisely correct color mixture, it is necessary to provide different parameters (ratios of color mixture) in accordance with the position within the imaging surface of a color imaging element 1, the incidence angle θ of light incident from a lens 2, which changes depending on the focal length, and the incidence angle range φ, which changes depending on the aperture stop value (F number). Further, color mixture involves complex phenomena; for example, a pixel that causes color mixture in its neighbors is itself affected by color mixture from those adjacent pixels. Thus, it is difficult to measure the amount of color mixture (ratio of color mixture) caused by the adjacent pixels.
The color mixture also depends on the wavelength. In particular, color mixture caused by red pixels is problematic, since the long-wavelength light that is their dominant component penetrates deep below the silicon surface.
FIG. 27A is a plan view of a color imaging element having a primary-color Bayer array as a color filter array (CF array) most widely used in a single-plate-type color imaging element. FIG. 27B is a sectional view of a principal section taken along line 27b-27b of FIG. 27A.
As shown in FIG. 27B, in the color imaging element having the Bayer array, the green (G) pixels are arranged in a checkered pattern (checkerboard pattern), and the red (R) and blue (B) pixels are arranged alternately line by line. Hence, light incident on the R pixels penetrates below the silicon surface into the G pixels vertically and horizontally adjacent to the R pixels. As a result, color mixture occurs.
FIG. 28A is a plan view of a color imaging element having a new color filter array (new CF array) recently proposed by the present applicants. FIG. 28B is a sectional view of a principal section taken along the line 28b-28b of FIG. 28A.
As shown in FIG. 28A, in the new CF array, basic array patterns based on units of 6×6 pixels are repeatedly arranged in the horizontal direction and the vertical direction.
[Related Art 1]
In the past, in color imaging elements having the Bayer array, the following method has been used to estimate the amount of color mixture: red light, which tends to cause color mixture because it reaches the deepest position under the silicon surface, is made incident, and the output difference between the pixels at the left and right positions (or the upper and lower positions) of a red pixel is taken as the amount of color mixture. For example, at the left position within the imaging surface shown in FIG. 29A, assuming that the outputs of the G pixels at the left and right of an R pixel are respectively G-LL and G-LR, the amount of color mixture is represented as (G-LL)-(G-LR). Likewise, at the right position within the imaging surface shown in FIG. 29C, assuming that the outputs of the G pixels at the left and right of an R pixel are respectively G-RL and G-RR, the amount of color mixture is represented as (G-RR)-(G-RL).
However, at the central portion of the imaging surface shown in FIG. 29B, assuming that the outputs of the G pixels at the left and right of an R pixel are respectively G-CL and G-CR, the outputs are ideally equal (G-CL = G-CR), so the calculated amount of color mixture becomes zero. The reason for this is that the incidence angle range φ (the spread θ±φ shown in FIG. 26) is not considered. In practice, at the central portion of the imaging surface (at F2.0), light is incident over an incidence angle range of 0±20°, and color mixture occurs symmetrically in the vertical and horizontal directions; however, this cannot be estimated by this method.
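The limitation of the difference-based estimation described above can be illustrated with a short sketch (the function name and the pixel output values below are hypothetical, chosen only to show the behavior):

```python
def mixture_by_difference(g_left: float, g_right: float) -> float:
    """Estimate the amount of color mixture as the output difference
    between the G pixels to the left and right of an R pixel."""
    return g_left - g_right

# Left position within the imaging surface: the G pixel on the left
# receives more leaked red light, so a nonzero amount is estimated.
print(mixture_by_difference(110.0, 100.0))  # 10.0

# Central portion: leakage is symmetric (G-CL == G-CR), so the method
# calculates zero even though color mixture actually occurs.
print(mixture_by_difference(105.0, 105.0))  # 0.0
```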
Likewise, at the left position within the imaging surface, the output G-LR of the G pixel on the right side of the R pixel, which is subtracted as the reference, itself contains color mixture of light incident from the right side; even though this color mixture from the right side decreases moving outward from the central portion, the color mixture from the opposite side is not considered.
When the incidence angle θ is large (as in the case of a non-telecentric wide-angle lens or the like), the color mixture from the opposite side is sufficiently small, and hence the above-mentioned problem does not appear even with the conventional method. However, in the case of a lens through which light is incident substantially perpendicularly over the entire imaging surface, the correction becomes insufficient.
[Related Art 2]
A method for correcting this is known. In this method, even at the central portion of the imaging surface, correction can be performed by regarding, as indicating color mixture, not the output difference between the adjacent pixels on the left and right sides (or the upper and lower sides) but the outputs themselves of the adjacent pixels obtained when pure red light is incident. However, this method cannot determine the direction of the color mixture. Hence, the color mixture from the left and right (or upper and lower) adjacent pixels is calculated by applying the ratio of color mixture to the average of the outputs of both the left and right (or upper and lower) red pixels. As a result, a problem arises in that a weak low-pass effect is exerted on the color-mixture-corrected image, which adversely affects its resolution.
As a method of performing the correction without adversely affecting the resolution, the following combination of the related arts can be adopted. At the central portion of the imaging surface, the ratio of color mixture is calculated by regarding the outputs of the adjacent pixels obtained when pure red light is incident as indicating color mixture. At the peripheral portions, as in the conventional method, the ratio of color mixture is calculated from the difference between the left and right sides, and linear interpolation is performed over the region between the central portion and the peripheral portions. However, this method is also unsatisfactory, since the color mixture from the opposite side is regarded as zero at the peripheral portions.
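The interpolation step of this combined method can be sketched as follows. This is a minimal one-dimensional illustration with hypothetical ratio values, not the method of any cited document:

```python
def mixture_ratio(x: float, width: float, ratio_center: float,
                  ratio_left: float, ratio_right: float) -> float:
    """Linearly interpolate a ratio of color mixture between the
    central portion (measured with pure red light) and the peripheral
    portions (measured by the left/right difference method)."""
    center = width / 2.0
    if x <= center:
        t = x / center              # 0 at left edge, 1 at center
        return ratio_left + (ratio_center - ratio_left) * t
    t = (x - center) / center       # 0 at center, 1 at right edge
    return ratio_center + (ratio_right - ratio_center) * t

# Hypothetical ratios over a 100-pixel-wide imaging surface.
print(mixture_ratio(0.0, 100.0, 0.02, 0.05, 0.04))    # 0.05 (left edge)
print(mixture_ratio(50.0, 100.0, 0.02, 0.05, 0.04))   # 0.02 (center)
print(mixture_ratio(100.0, 100.0, 0.02, 0.05, 0.04))  # 0.04 (right edge)
```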
JP2012-95061A describes the following technology. In an imaging element having image generation pixels and other specific pixels (white pixels or phase difference detection pixels, which do not have color filters), only blue light or red light is emitted, and only light caused by color mixture is received by the image generation pixels, whereby a color-mixture correction coefficient (ratio of color mixture) is calculated.
JP2010-16419A describes the following technology. When R single-color light is emitted, the response amounts (signal amounts) of the G and B pixels are taken as the amounts of color mixture from the adjacent R pixels, whereby the ratios of color mixture of the R pixels to the G and B pixels are calculated. Likewise, when G single-color light or B single-color light is emitted, the ratios of color mixture of the G pixels to the R and B pixels, or of the B pixels to the R and G pixels, are calculated (JP2010-16419A, paragraphs 0084 to 0087).
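This kind of ratio calculation can be sketched as follows (a hypothetical function and hypothetical signal amounts, assuming the leaked signal is simply divided by the signal of the pixel causing the leakage):

```python
def mixture_ratios_from_red_light(r_signal: float, g_signal: float,
                                  b_signal: float) -> tuple:
    """Under R single-color illumination, the G and B pixel responses
    are attributed entirely to leakage from adjacent R pixels, so the
    ratios of color mixture are the leaked signals relative to the
    R pixel signal."""
    return g_signal / r_signal, b_signal / r_signal

# Hypothetical signal amounts measured under R single-color light.
r_to_g, r_to_b = mixture_ratios_from_red_light(1000.0, 50.0, 20.0)
print(r_to_g, r_to_b)  # 0.05 0.02
```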
JP2010-16419A also describes the following technology. A color-mixture correction coefficient is stored for each of a plurality of regions into which the imaging surface of the imaging element is divided. At the time of color mixture correction, the ratio of color mixture at a required pixel position is obtained by interpolating the stored color-mixture correction coefficients, and the color mixture component is corrected using the acquired ratio of color mixture.
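A stored-coefficient-plus-interpolation scheme of this kind can be sketched as follows. Bilinear interpolation over a hypothetical grid of per-region coefficients is assumed here; the actual interpolation used in JP2010-16419A may differ:

```python
def interpolate_coefficient(grid, x, y, region_w, region_h):
    """Bilinearly interpolate per-region color-mixture correction
    coefficients (grid[row][col]) at pixel position (x, y).  Each
    stored coefficient is taken as the value at its region's center,
    and positions outside the grid of centers are clamped."""
    rows, cols = len(grid), len(grid[0])
    fx = min(max(x / region_w - 0.5, 0.0), cols - 1.0)
    fy = min(max(y / region_h - 0.5, 0.0), rows - 1.0)
    c0, r0 = int(fx), int(fy)
    c1, r1 = min(c0 + 1, cols - 1), min(r0 + 1, rows - 1)
    tx, ty = fx - c0, fy - r0
    top = grid[r0][c0] * (1 - tx) + grid[r0][c1] * tx
    bottom = grid[r1][c0] * (1 - tx) + grid[r1][c1] * tx
    return top * (1 - ty) + bottom * ty

# 2x2 regions of 10x10 pixels each, with hypothetical coefficients.
grid = [[0.010, 0.012],
        [0.014, 0.016]]
ratio = interpolate_coefficient(grid, 10.0, 10.0, 10.0, 10.0)
```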
JP2009-188461A describes that red light on the long wavelength side does not attenuate, and thus the amount of color mixture is large (JP2009-188461A, paragraph 0006). Further, storage means stores a correction function that depends on the aperture stop value or the zoom position, and color mixture correction is performed by multiplying a color-mixture correction coefficient by a value calculated from the correction function (JP2009-188461A, claim 3).