FIG. 13 is a block diagram illustrating the structure of a single-CCD digital camera as one example of a conventional image sensing apparatus. In FIG. 13, a solid-state image sensing device 1 such as a CCD has its surface covered by an RGB color filter of, e.g., a Bayer-type array, which enables color image sensing. The optical image of an object that impinges upon the image sensing device 1 via a lens (not shown) is converted to an electric signal by the image sensing device 1. In order to eliminate noise from the electric signal obtained by the conversion, the signal is processed by a CDS/AGC circuit 2, after which it is converted to a digital signal, pixel by pixel in successive fashion, by an A/D converter circuit 3.
The digital signal output from the A/D converter circuit 3 has its white gain adjusted by a white balance circuit 4, and the resulting signal is sent to a luminance notch circuit 12. The luminance notch circuit 12 uses a vertical low-pass filter (VLPF) to reduce signal gain at frequencies in the vicinity of the Nyquist frequency in the vertical direction. Gain reduction by a horizontal low-pass filter (HLPF) is executed similarly in the horizontal direction. Such a filter shall be referred to as a "luminance notch filter" below. Next, a horizontal band-pass filter (HBPF) circuit 13 and a vertical band-pass filter (VBPF) circuit 16 boost the frequency components slightly below the Nyquist frequency that were attenuated by the notch filters.
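As a sketch of the notch-filtering step, a symmetric [1, 2, 1]/4 low-pass kernel has a true zero in its frequency response at the Nyquist frequency, so applying it vertically and then horizontally behaves as a luminance notch filter of the kind described above. The tap values are an assumption for illustration; the text does not specify the actual filter coefficients.

```python
import numpy as np

def luminance_notch(y: np.ndarray) -> np.ndarray:
    """Apply a [1, 2, 1]/4 low-pass filter vertically, then horizontally.

    The kernel's frequency response is cos^2(pi*f/fs), which is zero at
    the Nyquist frequency, so energy near Nyquist is strongly suppressed
    while the DC level is preserved. The tap values are illustrative.
    """
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    # Vertical pass (VLPF): filter each column.
    v = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, y)
    # Horizontal pass (HLPF): filter each row.
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, v)
```

Applied to a checkerboard pattern (pure Nyquist-frequency content), this filter drives interior samples to zero, while a flat field passes through unchanged.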
Amplitude is subsequently adjusted by PP (aperture peak) gain circuits 14 and 17 in the horizontal and vertical directions, respectively, and low-amplitude components are then clipped by base clipping (BC) circuits 15 and 18 to eliminate noise. The horizontal and vertical components are subsequently added by an adder 19, main gain is applied by an APC (aperture control) main gain circuit 20, and the resultant signal is added to a baseband signal by an adder 21. A gamma conversion circuit 22 then performs a gamma conversion, and a luminance correction (YCOMP) circuit 23 executes a luminance-signal level correction based upon color.
Next, as an example of color signal processing, a color interpolation circuit 5 interpolates the signal so that values of all colors are present at every pixel, and a color conversion matrix (MTX) circuit 6 converts the color signals to a luminance signal (Y) and color difference signals (Cr, Cb). Color-difference gain in low- and high-luminance regions is then suppressed by a chroma suppression (CSUP) circuit 7, and the band is limited by a chroma low-pass filter (CLPF) circuit 8. The band-limited chroma signal is converted to an RGB signal and simultaneously subjected to a gamma conversion by a gamma conversion circuit 9. The RGB signal resulting from the gamma conversion is converted back to Y, Cr, Cb signals, gain is adjusted again by a chroma gain knee (CGain Knee) circuit 10, and a linear clip matrix (LCMTX) circuit 11 makes a minor correction of hue and corrects a shift in hue caused by individual differences between image sensing devices.
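The matrix conversion performed by the MTX circuit 6 can be illustrated with the standard ITU-R BT.601 coefficients; these coefficients are an assumption for illustration, since the text does not give the actual matrix used by the circuit.

```python
def rgb_to_ycrcb(r: float, g: float, b: float) -> tuple:
    """Convert one RGB pixel to luminance Y and color differences Cr, Cb.

    The coefficients follow ITU-R BT.601 as an illustrative assumption;
    the actual matrix used by the MTX circuit is not specified.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = 0.5 * (r - y) / (1.0 - 0.299)  # scale so full-range R maps to +/-0.5
    cb = 0.5 * (b - y) / (1.0 - 0.114)  # scale so full-range B maps to +/-0.5
    return y, cr, cb
```

For a neutral pixel (R = G = B), both color differences are zero, which is the property the later chroma-processing stages rely on.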
Processing executed by the white balance circuit 4 in the image sensing apparatus of FIG. 13 will now be described. The image signal output from the image sensing device 1 and converted to a digital signal by the A/D converter circuit 3 is divided into a plurality (any number) of blocks of the kind shown in FIG. 14, and color evaluation values Cx, Cy, Y are calculated for each block based upon the following equations:

Cx = (R − B) / Y
Cy = (R + B − 2G) / Y
Y = (R + G + B) / 2  (1)
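Equations (1) can be sketched directly. Here the per-block averaging of the sensor data is simplified to given R, G, B values for one block:

```python
def color_evaluation(r: float, g: float, b: float) -> tuple:
    """Compute the color evaluation values Cx, Cy, Y of one block (Eq. 1)."""
    y = (r + g + b) / 2.0
    cx = (r - b) / y
    cy = (r + b - 2.0 * g) / y
    return cx, cy, y
```

A neutral (gray) block yields Cx = Cy = 0, so a perfectly white block under the reference illuminant lies at the origin of the Cx/Cy plane.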
The color evaluation values Cx, Cy of each block calculated in accordance with Equations (1) are compared with a previously set white detection region (described later). If the evaluation values fall within the white detection region, the block is assumed to be white, and the summation values (SumR, SumG, SumB) of the color pixels of all blocks assumed to be white are calculated. White balance gains kWB_R, kWB_G, kWB_B for each of the colors R, G, B are then calculated from the summation values using the following equations:

kWB_R = 1.0 / SumR
kWB_G = 1.0 / SumG
kWB_B = 1.0 / SumB  (2)
The white balance circuit 4 performs the white balance correction using the white balance gains thus obtained.
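The detection-and-summation procedure of Equations (1) and (2) can be combined into a short sketch. The white-detection test is reduced to a hypothetical predicate `in_white_region`, standing in for the region comparison described below:

```python
def white_balance_gains(blocks, in_white_region):
    """Sum R, G, B over blocks judged white; return per-color gains (Eq. 2).

    `blocks` is an iterable of (r, g, b) block sums; `in_white_region` is a
    hypothetical predicate implementing the Cx/Cy white-region test.
    """
    sum_r = sum_g = sum_b = 0.0
    for r, g, b in blocks:
        y = (r + g + b) / 2.0          # Eq. (1)
        cx = (r - b) / y
        cy = (r + b - 2.0 * g) / y
        if in_white_region(cx, cy):    # block assumed to be white
            sum_r += r
            sum_g += g
            sum_b += b
    return 1.0 / sum_r, 1.0 / sum_g, 1.0 / sum_b  # Eq. (2)
```

Multiplying each color channel by its gain equalizes the summed R, G, B of the blocks judged white, which is the correction the white balance circuit 4 applies.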
FIGS. 15A and 15B are graphs illustrating a white detection region 101. In order to obtain the white detection region 101, a white object such as a standard white sheet (not shown) is sensed under light sources from high to low color temperatures at arbitrary color-temperature intervals, and the color evaluation values Cx, Cy are calculated based upon Equations (1) using the signal values obtained from the image sensing device 1. Next, the Cx and Cy obtained for each of the light sources are plotted along the X axis and Y axis, respectively, and the plotted points are connected by straight lines, or alternatively approximated using a plurality of straight lines. The result is a white detection axis 102 running from high to low color temperatures. In actuality, there are slight variations in spectral distribution even for the color white. For this reason, the white detection axis 102 is given some width along the direction of the Y axis. This region is defined as the white detection region 101.
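The construction just described, a piecewise-linear axis widened along the Cy direction, can be sketched as follows. The axis points and the half-width are assumptions for illustration; actual values would be measured from the sensor as described above.

```python
import bisect

def make_white_region(axis_points, half_width):
    """Build an in-region test from a white detection axis.

    `axis_points` are (Cx, Cy) pairs measured for a white object under light
    sources from high to low color temperature, sorted by Cx; `half_width` is
    the width given to the axis along the Cy direction. Both are illustrative.
    """
    xs = [p[0] for p in axis_points]
    ys = [p[1] for p in axis_points]

    def in_region(cx, cy):
        if cx < xs[0] or cx > xs[-1]:
            return False
        # Find the axis segment containing cx and interpolate its Cy.
        i = min(bisect.bisect_right(xs, cx) - 1, len(xs) - 2)
        t = (cx - xs[i]) / (xs[i + 1] - xs[i])
        axis_cy = ys[i] + t * (ys[i + 1] - ys[i])
        return abs(cy - axis_cy) <= half_width

    return in_region
```

A predicate built this way could serve as the white-region test applied to each block's evaluation values.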
The conventional white balance detection apparatus, however, has certain drawbacks. For example, consider a case where a close-up of a human face is taken in the presence of a light source having a high color temperature, such as sunlight. While the color evaluation values of a white subject under sunlight are distributed as indicated by area 103 in FIG. 15A, the color evaluation values of human complexion are distributed as indicated by area 105 in FIG. 15A. These values fall in substantially the same area as the color evaluation values (area 104 in FIG. 15A) of the color white photographed in the presence of a light source having a low color temperature, such as tungsten light.
Further, the color evaluation values are distributed as indicated by area 106 in FIG. 15B in a case where an area (on the order of 7000 K) in which the blue of the sky has grown pale, as at the horizon or at the boundaries of clouds, is included in a scene in, say, a scenery mode. This distribution substantially coincides with the distribution of evaluation values (area 107 in FIG. 15B) of the color white sensed under cloudy conditions or in the shade. As a consequence, the color temperature of the scene is judged to be higher than it actually is (the color temperature under clear skies is on the order of 5500 K), and the pale blue of the sky is corrected to white. This represents a judgment error.