A description is now given of the operation of a conventional white balance gain calculation circuit used in digital cameras and the like. First, as shown in FIG. 22, the screen is divided in advance into an arbitrary plurality of blocks (m). Then, for each block (1 through m), the pixel values are added and averaged for each color to obtain color averages (R[i], G[i], B[i]), and color evaluation values (Cx[i], Cy[i]) are calculated using, for example, equation (1) below:

Cx[i] = (R[i] − B[i]) / Y[i] × 1024
Cy[i] = (R[i] + B[i] − 2G[i]) / Y[i] × 1024  (1)

where Y[i] = R[i] + 2G[i] + B[i], and [i] is the index number of each block.
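The block division and equation (1) can be sketched as follows. This is a minimal illustration, assuming the image is an RGB NumPy array; the function name, block counts, and channel order are assumptions, not part of the original circuit.

```python
import numpy as np

def color_evaluation_values(image, blocks_y, blocks_x):
    """Divide an RGB image into blocks, average each color per block,
    and compute the color evaluation values (Cx, Cy) of equation (1)."""
    h, w, _ = image.shape
    bh, bw = h // blocks_y, w // blocks_x
    cx = np.empty((blocks_y, blocks_x))
    cy = np.empty((blocks_y, blocks_x))
    for by in range(blocks_y):
        for bx in range(blocks_x):
            block = image[by * bh:(by + 1) * bh, bx * bw:(bx + 1) * bw]
            r = block[..., 0].mean()
            g = block[..., 1].mean()
            b = block[..., 2].mean()
            y = r + 2 * g + b                    # Y[i] = R + 2G + B
            cx[by, bx] = (r - b) / y * 1024      # Cx[i]
            cy[by, bx] = (r + b - 2 * g) / y * 1024  # Cy[i]
    return cx, cy
```

For a neutral gray block (R = G = B), both Cx and Cy evaluate to zero, which is consistent with white/gray lying near the origin of the Cx–Cy plane in FIG. 23.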
Then, white objects are sensed under a variety of light sources in advance and their color evaluation values are calculated. If the color evaluation values calculated at a block fall within a white detection range 301 set in advance, like that shown in FIG. 23, that block is judged to be white, and the pixel values of all blocks so judged are integrated. The white detection range 301 is a plot of the color evaluation values calculated in advance by sensing white under different light sources: the negative direction of the x coordinate (Cx) in FIG. 23 corresponds to the color evaluation value obtained when sensing a white object under a high color temperature light source, and the positive direction to that obtained under a low color temperature light source. The y coordinate (Cy) indicates the degree of the green component of the light source: the further in the negative direction, the greater the G component, and the more likely the light source is a fluorescent lamp.
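The white judgment and integration step might be sketched as below. Note that the real white detection range 301 is a measured, irregular region in the Cx–Cy plane; the rectangular bounds here are illustrative placeholders, and all names are assumptions.

```python
def is_white_block(cx, cy, cx_min=-300.0, cx_max=300.0,
                   cy_min=-250.0, cy_max=50.0):
    """Judge a block white if its color evaluation values fall inside a
    (simplified, rectangular) stand-in for white detection range 301."""
    return cx_min <= cx <= cx_max and cy_min <= cy <= cy_max

def integrate_white_pixels(averages, cx_vals, cy_vals):
    """Sum the per-block color averages of every block judged white,
    yielding the integrated values (sumR, sumG, sumB)."""
    sum_r = sum_g = sum_b = 0.0
    for (r, g, b), cx, cy in zip(averages, cx_vals, cy_vals):
        if is_white_block(cx, cy):
            sum_r += r
            sum_g += g
            sum_b += b
    return sum_r, sum_g, sum_b
```

In practice the range boundary would be tabulated from the calibration measurements described above rather than expressed as fixed box limits.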
Then, white balance coefficients (WBCo_R, WBCo_G, WBCo_B) are calculated from the integrated pixel values (sumR, sumG, sumB) using equation (2) below:

WBCo_R = sumY × 1024 / sumR
WBCo_G = sumY × 1024 / sumG
WBCo_B = sumY × 1024 / sumB  (2)

where sumY = (sumR + 2 × sumG + sumB) / 4.
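Equation (2) translates directly into code. A minimal sketch (the function name is an assumption):

```python
def white_balance_coefficients(sum_r, sum_g, sum_b):
    """Compute white balance coefficients (WBCo_R, WBCo_G, WBCo_B)
    from the integrated pixel values of the blocks judged white,
    per equation (2)."""
    sum_y = (sum_r + 2 * sum_g + sum_b) / 4  # sumY
    return (sum_y * 1024 / sum_r,
            sum_y * 1024 / sum_g,
            sum_y * 1024 / sum_b)
```

When the integrated values are already neutral (sumR = sumG = sumB), each coefficient reduces to 1024, i.e. a unity gain in this fixed-point scale.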
However, the following problem arises with the conventional white balance gain calculation method when taking close-up shots of a person's face as shown in FIG. 24. Because the color evaluation value of a complexion area sensed under sunlight (indicated by 9-1 in FIG. 23) and the color evaluation value of a white object sensed under tungsten light (indicated by 9-2 in FIG. 23) are substantially identical, under tungsten light the complexion is sometimes erroneously judged to be white and is corrected toward white.
To correct the foregoing problem, it has been proposed to remove the face area detected by a face detection circuit from white detection (see FIG. 25) (Japanese Patent Laid-open No. 2003-189325).
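That proposal amounts to filtering face-overlapping blocks out of the white detection targets. A sketch under the assumption that the face detector returns a bounding rectangle and that each block's pixel rectangle is known; all names here are illustrative:

```python
def white_blocks_excluding_face(block_coords, face_box):
    """Return indices of blocks that do not overlap the detected face
    rectangle, so that white detection skips the face area.
    face_box and each entry of block_coords are (x0, y0, x1, y1)
    pixel rectangles with exclusive right/bottom edges."""
    fx0, fy0, fx1, fy1 = face_box
    kept = []
    for i, (x0, y0, x1, y1) in enumerate(block_coords):
        # Standard axis-aligned rectangle overlap test.
        overlaps = not (x1 <= fx0 or fx1 <= x0 or y1 <= fy0 or fy1 <= y0)
        if not overlaps:
            kept.append(i)
    return kept
```

White judgment and integration would then run only over the returned block indices, which is precisely where the drawbacks discussed below originate.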
However, in such a conventional white balance gain calculation method, no consideration is given to cases in which the face detection circuit erroneously identifies a face or fails to detect one. As a result, if the face detection circuit erroneously detects an area that is not a face as a face, the white detection target area shrinks and the accuracy of the output color temperature information declines. Conversely, if an area is not recognized as a person's face even though it is one, white detection is implemented in the face area and the accuracy of the white balance correction declines.
Moreover, when a close-up is taken of a person's face as shown in FIG. 24, if the face area is removed from the white detection target areas, the object areas available for white detection practically disappear, resulting in a decrease in white balance correction accuracy.