When a single plate-type solid-state image sensor is used as an image sensor of an imaging apparatus, only a single spectral sensitivity is obtained. Therefore, generally, color imaging is performed by arranging color filters of different colors such as R, G, and B on image sensors corresponding to respective pixels. In this method, only one color (for example, any one of R, G, and B) is obtained with each pixel. Accordingly, a mosaic-like image based on color is generated.
Specifically, only one piece of color information (R, G, B, or the like) is acquired for each pixel according to the filter pattern. Such an image is called a mosaic image. In order to obtain a color image from the mosaic image, it is necessary to obtain the color information of every color for each of all the pixels.
A color image can be generated by calculating color information of all colors (for example, all the RGB) corresponding to each of all pixels by interpolating color information obtained from surrounding pixels of each pixel. This interpolation processing is called demosaic processing.
For example, an example of the color filters used for an imaging apparatus is illustrated in FIG. 1(1). This array is called the Bayer pattern, and transmits light having a specific wavelength component (R, G, or B) in units of a pixel. In the Bayer pattern, the minimum unit consists of four pixels: two filters that transmit green (G), one filter that transmits blue (B), and one filter that transmits red (R).
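As a concrete illustration, the Bayer sampling described above can be sketched in Python. The RGGB layout and the function name `bayer_mosaic` are assumptions for illustration, not taken from the cited documents:

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample a full-color image into a Bayer mosaic (one color per pixel).

    Assumed RGGB layout for the minimum four-pixel unit:
      row 0: R  Gr R  Gr ...   (G pixels on R lines: Gr)
      row 1: Gb B  Gb B  ...   (G pixels on B lines: Gb)
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R positions keep the R channel
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # Gr positions keep the G channel
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # Gb positions keep the G channel
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B positions keep the B channel
    return mosaic
```

Demosaic processing then has to interpolate the two missing colors at every position of such a mosaic from the surrounding pixels.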
With the miniaturization of the image sensor, the sensor becomes more easily affected by minute differences in pixel structure. Therefore, it has become apparent that even pixels having the same spectral characteristic (for example, the G pixels in the Bayer pattern) differ in sensitivity from pixel to pixel due to slight differences in structure.
For example, as illustrated in FIG. 1(2), the G pixels include G pixels in an R line (hereinafter, referred to as Gr pixels) and G pixels in a B line (hereinafter, referred to as Gb pixels). Although the Gr pixels have G filters having the same spectral characteristic as those of the Gb pixels, there might be sensitivity differences because of the slight structural differences.
When the above-mentioned demosaic processing is performed on an image captured by an image sensor having such sensitivity differences, a portion with small differences in brightness, which should originally be determined to be a flat portion, may be erroneously determined to be an edge portion due to the difference in DC component between the Gb pixels and the Gr pixels. As a result, an error occurs in selecting the surrounding pixels used to determine the pixel value of a specific pixel, so that a plurality of interpolation values is mixed irregularly. This is likely to generate a highly conspicuous artifact. Therefore, it is necessary to correct the sensitivity difference before the demosaic processing is performed. The demosaic processing is described, for example, in Patent Document 1 (Japanese Patent No. 2931520).
When the sensitivity differences have the same tendency over the entire screen, the correction may be performed by adjusting the level and/or the offset. That is, since the correction is performed such that the sensitivity of the G pixels in an R line (Gr pixels) matches the sensitivity of the G pixels in a B line (Gb pixels), the following can be estimated with use of coefficients A and B.

[Formula 1]
Gb(  ) = Gr × A + B  (Expression 1)
In the above expression (Expression 1), Gb with a bar above it indicates a pixel value obtained by correcting the sensitivity of the G pixels in an R line (Gr pixels) so as to match the sensitivity of the Gb pixels. The symbol "  " (bar) written above Gb and the like in the expressions is written in the form of Gb(  ) in this specification.
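The global gain/offset correction of (Expression 1) can be sketched as follows. The coefficient values used here are purely illustrative; in practice A and B would be calibrated so that the corrected Gr values match the Gb sensitivity over the whole screen:

```python
import numpy as np

def correct_gr(gr_values, a, b):
    """Map Gr pixel values onto the Gb sensitivity scale (Expression 1):
    Gb_bar = Gr * A + B, with a single (A, B) pair for the whole screen."""
    gr = np.asarray(gr_values, dtype=np.float64)
    return gr * a + b

# Illustrative calibration: a slight gain and a small negative offset.
corrected = correct_gr([100.0, 200.0], a=1.02, b=-1.5)
```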
When the sensitivity differences have the same tendency over the entire screen, it is effective to use a correction value obtained by using the above expression (Expression 1). However, causes of generation of the sensitivity differences at the positions of respective pixels include various factors such as a pixel structure and an angle of incident light. Therefore, the sensitivity varies from pixel to pixel (for example, an upper side and a lower side of a screen). Moreover, even the same pixel changes in sensitivity due to the influence of the aperture of a lens or the like.
A method of absorbing the level difference according to the pixel position has also been proposed. The method measures the sensitivity difference of each area and absorbs it by performing correction processing based on a gain and an offset. For example, when the horizontal distance from the center is assumed to be x and the vertical distance is assumed to be y, a correction coefficient for each pixel can be approximately calculated by using a correction function f(x, y) and a correction function g(x, y) calculated from the sensitivity difference of each area, and the correction can be performed as follows.

[Formula 2]
Gb(  ) = Gr × f(x, y) + g(x, y)  (Expression 2)
However, this method achieves only a rough correction for each area. Accordingly, the sensitivity difference between fine areas cannot be absorbed. In addition, since the sensitivity also depends on optical characteristics such as an aperture and a zoom state of a lens, a great deal of labor and time is required to measure the f(x, y) and/or the g(x, y).
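A minimal sketch of the per-position correction of (Expression 2) is shown below. The quadratic gain and the constant offset are assumed shapes chosen only for illustration; real f and g would be fitted from the per-area sensitivity measurements the text describes, and would also depend on the lens state:

```python
def f(x, y):
    # Correction gain (assumed shape): grows slightly toward the periphery.
    return 1.0 + 1e-8 * (x * x + y * y)

def g(x, y):
    # Correction offset (assumed constant here; a real g varies with position).
    return 0.5

def correct_gr_at(gr, x, y):
    """Apply (Expression 2) to one Gr value at distance (x, y) from center:
    Gb_bar = Gr * f(x, y) + g(x, y)."""
    return gr * f(x, y) + g(x, y)
```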
There is also a technique to absorb the sensitivity difference by using only information on adjacent pixels. In Patent Document 2 (Japanese Patent Application Laid-open (JP-A) No. 2005-160044), demosaic processing is performed on an image of four colors obtained by using a color filter that transmits emerald (E) in addition to the filters that transmit R, G, and B, as illustrated in FIG. 1(3). The processing exploits the fact that the spectral characteristics of the G and E color filters are similar to each other. By estimating E pixels at the positions of G pixels and estimating G pixels at the positions of E pixels, an image illustrated in FIG. 1(4) can be produced.
For the image arrayed as illustrated in FIG. 1(4), it becomes possible to perform demosaic processing similar to that applied to the Bayer pattern (FIG. 1(1)). Although the spectral characteristics and the sensitivities of the G filter and the E filter differ as illustrated in FIG. 2, the spectral characteristics partially overlap, so there is a strong correlation between the G pixels and the E pixels. Accordingly, estimation based on regression analysis can be used to estimate the E pixels at the positions of the G pixels or the G pixels at the positions of the E pixels.
The technique which estimates E pixels at the positions of G pixels is illustrated as an example. The weighted means mE and mG of the adjacent E pixels and G pixels are calculated as follows.
[Formula 3]
mE = Σi (Ei × Ci) / Σi Ci  (Expression 3)
mG = Σj (Gj × Cj) / Σj Cj  (Expression 4)
In the above expressions (Expression 3) and (Expression 4), i represents the pixel number of a certain surrounding pixel, Ei represents the pixel value of the E pixel corresponding to that number, and Ci represents a weighting factor corresponding to the distance from the center pixel. Likewise, j represents the pixel number of another surrounding pixel, Gj represents the pixel value of the G pixel corresponding to that number, and Cj represents a weighting factor corresponding to the distance from the center pixel.
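The weighted means of (Expression 3) and (Expression 4) can be sketched as a single helper; the neighbor values and distance weights below are illustrative:

```python
def weighted_mean(values, weights):
    """Weighted mean of surrounding pixel values (Expressions 3 and 4):
    m = sum(v_i * c_i) / sum(c_i), where c_i weights nearer pixels more."""
    num = sum(v * c for v, c in zip(values, weights))
    den = sum(weights)
    return num / den

# Illustrative: four adjacent E pixels, nearer ones weighted twice as heavily.
mE = weighted_mean([100, 104, 98, 102], [2, 1, 1, 2])
```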
The dispersion VGG of the adjacent G pixels and the covariance VEG of the G pixels and the E pixels are calculated in consideration of the difference in spectral characteristic between the E pixels and the G pixels illustrated in FIG. 2, and an estimation value of the E pixel is obtained as follows.
[Formula 4]
E(  ) = (VEG / VGG) × (G − mG) + mE  (Expression 5)
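A sketch of this regression estimate, assuming per-neighbor weights and pairing the i-th E neighbor with the i-th G neighbor for the covariance (a simplification for illustration, since in a real mosaic the E and G samples sit at different positions):

```python
import numpy as np

def estimate_e(g_center, g_neighbors, e_neighbors, weights):
    """Regression estimate of an E value at a G pixel position (Expression 5):
    E_bar = (V_EG / V_GG) * (G - mG) + mE,
    using weighted statistics of the adjacent G and E pixels."""
    g = np.asarray(g_neighbors, dtype=np.float64)
    e = np.asarray(e_neighbors, dtype=np.float64)
    c = np.asarray(weights, dtype=np.float64)
    mG = np.sum(g * c) / np.sum(c)
    mE = np.sum(e * c) / np.sum(c)
    vGG = np.sum(c * (g - mG) ** 2) / np.sum(c)        # dispersion of G
    vEG = np.sum(c * (g - mG) * (e - mE)) / np.sum(c)  # covariance of E and G
    return (vEG / vGG) * (g_center - mG) + mE
```

When the neighbors follow an exact linear relation (here E = 2G), the estimator reproduces it, which is the sense in which the G-E correlation is exploited.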
In the above-mentioned (Expression 5), it is necessary to calculate the dispersion and the covariance, and the amount of these calculations is very large. Accordingly, in some cases, the estimation is practically performed by using a lighter operation than (Expression 5), as described below.
[Formula 5]
E(  ) = (G / mG) × mE  (Expression 6)
However, even this expression (Expression 6) requires multiplication and division operations. Furthermore, implementing it as a circuit is costly.
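The simplified ratio estimate of (Expression 6) can be sketched as follows; the comment notes the per-pixel multiply and divide that the text identifies as the remaining hardware cost:

```python
def estimate_e_simple(g_center, mG, mE):
    """Lighter ratio-based estimate (Expression 6): E_bar = (G / mG) * mE.
    Even this form needs one division and one multiplication per pixel,
    which is the circuit cost the text points out."""
    return (g_center / mG) * mE
```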
The technique disclosed in Patent Document 2 (JP-A No. 2005-160044) can be used not only for processing the R, G, B, and E array illustrated in FIG. 1(4) but also for correcting the sensitivity difference between the Gb pixels and the Gr pixels in the Bayer pattern illustrated in FIG. 1(2) by estimation. However, a great amount of calculation is necessary to obtain the dispersion and the covariance in the above-mentioned (Expression 5), and the amount of calculation is also large in the simpler expression (Expression 6) because it includes division.
In the case of the array of R, B, and E illustrated in FIG. 1(4), the spectral characteristics of the G filter and the E filter differ as illustrated in FIG. 2. In the Bayer pattern illustrated in FIG. 1(2), on the other hand, although the filter characteristics of the Gb filter and the Gr filter are affected by color mixture, pixel structure, and incident light, they are very similar to each other as illustrated in FIG. 3. Therefore, it is anticipated that the correlation between them is stronger than that between the G pixels and the E pixels, and that the correction can be achieved with a smaller amount of operation.