1. Technical Field
The present invention relates to an imaging device provided with an autofocus function and an imaging method in the imaging device, and more specifically to an imaging device and an imaging method capable of accurately performing autofocus processing on a target in which a point light source is dominant (for example, a night scene).
2. Description of the Related Art
Digital cameras (hereinafter referred to as “imaging devices”) provided with an autofocus unit (hereinafter referred to as “AF unit”) are known. The AF unit generates information for deciding an in-focus position using image data generated from image signals acquired for a target by an imaging element, and automatically moves the imaging lens to the in-focus position on the basis of the generated information.
There are multiple techniques of AF processing performed by the AF unit. The AF unit mounted on a general imaging device uses a hill climbing AF technique (for example, see Japanese Patent Application Publication No. S39-5265). The hill climbing AF technique calculates an integral value of brightness differences between neighboring pixels on the basis of image signals that an imaging element outputs based on a target image formed on the light-receiving surface, and determines the in-focus position using the integral value. The integral value mentioned above is information for deciding the in-focus position, and is referred to as an “AF evaluation value”.
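As an illustrative sketch (not part of the patent disclosure), the AF evaluation value described above — an integral of brightness differences between neighboring pixels — can be computed as follows, assuming a grayscale image held in a NumPy array:

```python
import numpy as np

def af_evaluation_value(image: np.ndarray) -> float:
    # Integrate (sum) the absolute brightness differences between
    # horizontally neighboring pixels over the whole frame. A sharp
    # (in-focus) image has strong neighbor-to-neighbor differences
    # along edges, so this sum grows as the outline sharpens.
    diffs = np.abs(np.diff(image.astype(np.int64), axis=1))
    return float(diffs.sum())
```

A completely flat frame yields zero, while a frame containing a sharp edge yields a large value, which is why the value peaks at the in-focus position.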
When an imaging lens is at the in-focus position where a target image is focused on the light-receiving surface of the imaging element, the outline of the target image on the light-receiving surface is sharp. The AF evaluation value calculated from image signals of the target image when the imaging lens is at the in-focus position is larger than that calculated when the imaging lens is at any other position. Accordingly, when the imaging lens is at the in-focus position, the AF evaluation value theoretically takes its maximum value. Moreover, the AF evaluation value increases as the imaging lens approaches the in-focus position from an out-of-focus position, and decreases as the imaging lens moves away from the in-focus position.
The hill climbing AF technique detects the peak position of the AF evaluation values on the basis of the increase or decrease tendency of the AF evaluation value mentioned above. The AF unit employing the hill climbing AF technique calculates the AF evaluation value while moving the imaging lens. The AF evaluation value is calculated at predetermined time points specified in advance or at constant time intervals. The position of the imaging lens where the AF evaluation value is maximum (the peak position of the AF evaluation values) can be determined by associating each calculated AF evaluation value with the position of the imaging lens at which it was calculated. The imaging lens is then moved to this determined position, thereby being automatically focused on the target.
As described above, the hill climbing AF technique involves moving the lens over an entire movement range once, determining the position of the lens where the maximum AF evaluation value can be obtained within the movement range, and moving the lens to the determined position.
More specifically, the start position of the AF operation is first set to the center position of the lens movement range, and the lens is moved to that position. Next, the lens is moved from the center position in a certain direction, for example, toward the closest focus position, and then is reversely moved toward the infinity focus position. The AF evaluation value is calculated at predetermined time points within this movement range to determine the lens position where the maximum AF evaluation value is obtained.
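The scan procedure above can be sketched as follows (an illustrative Python sketch, not part of the disclosure; `evaluate` stands for whatever routine computes the AF evaluation value at a given lens position):

```python
def hill_climbing_scan(lens_positions, evaluate):
    # Sweep the lens over its movement range, compute the AF
    # evaluation value at each sampled position, and return the
    # position where the value is maximum (the peak position).
    best_pos, best_val = None, float("-inf")
    for pos in lens_positions:
        val = evaluate(pos)
        if val > best_val:
            best_pos, best_val = pos, val
    return best_pos
```

The lens is then driven to the returned position. Note that this simple argmax scan is exactly why a curve with no peak (as with a point light source target, discussed below) defeats the technique.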
Meanwhile, recent imaging elements have increased resolution, and many have several hundred megapixels. The increase in the number of pixels of the imaging element makes the pixel density higher and accordingly the pixel pitch smaller. With a smaller pixel pitch, each pixel has less sensitivity. To solve this problem, an imaging element having multiple driving conditions (driving modes) is known. The driving modes of the imaging element (hereinafter simply referred to as “driving modes”) change the image signal output operation of the imaging element, for example, by adding up or thinning out the image signals output from the respective pixels for a target image formed on the light-receiving surface. For example, when the image signals of two pixels are added up under a predetermined condition, the brightness is doubled. This increases the sensitivity.
Among the driving modes of the imaging element, the driving mode in which image signals are added up as described above is used for monitoring processing, in which an image for checking the target to be photographed is displayed on an LCD at the time of image capturing. In this driving mode, the image signals from the pixels are added up, and thereby the number of pixels represented in the image signals is reduced from the total number of pixels. In other words, the number of image signals used for generating the display image data is reduced, so that the density of the image becomes low. Accordingly, this driving mode is suitable for the monitoring processing, which performs display processing at predetermined time intervals.
FIG. 21 depicts an example of a typical pixel array of an imaging element, referred to as the Bayer array. Signals (image signals) read from the pixels of an imaging element with the Bayer array are added up in the horizontal direction and the vertical direction, thereby making it possible to reduce the number of signals to be processed in the following stages. For example, in generating image data used for the monitoring processing, the number of image signals is reduced by addition and/or thinning of the image signals on the basis of a certain rule, instead of processing every image signal output from all the pixels. FIG. 17 depicts an example of a signal readout state of a CCD 101 set in a driving mode with addition of image signals from two pixels in the horizontal direction and two pixels in the vertical direction.
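As an illustrative sketch (not part of the disclosure), the 2-pixel horizontal and vertical addition can be modeled on a single-channel array as follows; a real Bayer sensor adds same-color pixels, which sit two columns or rows apart, but the band-narrowing effect is the same:

```python
import numpy as np

def bin_2x2(img: np.ndarray) -> np.ndarray:
    # Add the image signals of 2 pixels in the horizontal direction
    # and 2 pixels in the vertical direction (grayscale sketch).
    # Summing four pixels quadruples the signal level, raising
    # sensitivity while quartering the pixel count.
    h, w = img.shape
    h, w = h - h % 2, w - w % 2  # crop to even dimensions
    img = img[:h, :w].astype(np.int64)
    return (img[0::2, 0::2] + img[0::2, 1::2]
            + img[1::2, 0::2] + img[1::2, 1::2])
```

The output has half the width and half the height of the input, which is what makes the mode suitable for low-resolution monitoring display.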
However, the foregoing addition of the image signals from the pixels in the horizontal direction and in the vertical direction may cause a phenomenon in which an abnormal AF evaluation value is obtained for some targets. This is because the pixel addition narrows the spatial frequency band of the image.
This phenomenon occurs differently depending on whether a bright target is photographed at daytime or a dark target is photographed at night. FIGS. 26A to 26C are views showing examples of targets: FIG. 26A depicts a target at daytime, FIG. 26B depicts a target at night, and FIG. 26C depicts a target including a person at night. In the daytime photographing shown in FIG. 26A, the contrast of the target in the image is sharp because the target is photographed under a bright light condition. However, in the night photographing shown in FIG. 26B and FIG. 26C, the contrast of the target in the image is less sharp because the whole target is dark. In such night photographing, if a building is included in the target, light leaking from the windows of rooms (illumination light or the like) almost dominates the whole target. When viewed from a distance, the illumination light leaking from the building looks like “points”. A target dominated by such “objects looking like points” is referred to as a point light source target.
The point light source target has almost no contrast. Accordingly, the AF evaluation value generated from the point light source target follows the curve shown in FIG. 23, in which the horizontal axis indicates the position of the imaging lens and the vertical axis indicates the AF evaluation value. As shown in FIG. 23, a peak position cannot be determined on the basis of the AF evaluation values generated from the point light source target. In daytime photographing, even if a point light source target largely occupies the whole target, the surroundings of the point light source target dominate the contrast, thereby making it possible to determine the peak position of the AF evaluation values. However, when a target that is dark as a whole includes a point light source target, for example at night, the light of the point light source target expands more as the lens becomes more out of focus. This increases the AF evaluation value, and the peak position cannot be determined.
To address this phenomenon, an imaging device has been proposed that is capable of performing autofocus with high accuracy even at low brightness by calculating an AF evaluation value while merging filters (for example, see Japanese Patent Application Publication No. 4679179). However, when a point light source target is photographed using the imaging device of Japanese Patent Application Publication No. 4679179, the autofocus processing is more difficult because the frequency band is narrowed by both the pixel addition and the merging of the filters.
Moreover, another imaging device has been proposed that prevents the phenomenon mentioned above and improves the AF accuracy by making a correction on the basis of the ratio of high-brightness portions (point light source portions) (for example, see Japanese Patent Application Publication No. 4553570). However, in night photographing, even if the correction is made on the basis of the ratio of point light sources, it is difficult to actually determine the in-focus position because the brightness of the surroundings of the point light sources is low.
As described above, a target in which point light sources are dominant (saturated) is difficult to automatically focus on because a peak of the AF evaluation values cannot be determined due to the low contrast of the target. Meanwhile, if the number of pixels added in the horizontal direction of the imaging element is reduced, a peak of the AF evaluation values can appear even in cases where the AF evaluation value would otherwise have no peak.
FIG. 24 is a graph showing AF evaluation values for different numbers of horizontal pixels added, indicating the changes in the AF evaluation values in cases where the number of horizontal pixels added is set to 1, 2, 3, and 4. As shown in FIG. 24, even when no peak appears in the AF evaluation values with the 4-pixel addition, a peak appears when the number of pixels added in the horizontal direction is reduced. In FIG. 24, a peak appears in the AF evaluation values in the case of the horizontal 1-pixel addition.
Therefore, for a target with low contrast, such as a point light source target, the factor preventing a peak from appearing is considered to be not the point light source itself but the number of pixels added. However, if the number of pixels added is reduced (or no pixel addition is performed), the sensitivity decreases. Accordingly, it is required to cause a peak to appear in the AF evaluation values without decreasing the sensitivity.
Further, when the frequency band of the image is expanded by omitting the pixel addition, it is necessary to determine the frequency band in which the target is present. FIG. 27 shows a histogram (a) of an example of the frequency distribution of an image of a certain target, obtained by performing fast Fourier transform (FFT) processing on the image, and a histogram (b) obtained by performing FFT processing on an image generated from the same image by horizontal 4-pixel addition. As shown, the frequency distribution (b) of the image with the horizontal 4-pixel addition is the same as the left one-fourth portion of the frequency distribution (a) of the image with no pixel addition, enlarged.
In other words, the pixel addition significantly limits the frequency band of the image. Accordingly, generating the AF evaluation value on the basis of the band-limited image actually yields an AF evaluation value in the low-frequency band even when a high-pass filter is applied to the image data. Thus, for a target dominated by the low-frequency band, such as a point light source target, it is desirable to reduce the number of pixels added and to apply a high-pass filter. However, simply reducing the number of pixels added and applying the high-pass filter causes another problem.
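The band narrowing described above can be verified numerically (an illustrative sketch, not part of the disclosure): adding every four horizontally adjacent samples quarters the sample count, so the spectrum of the added signal spans only one-fourth of the original bandwidth, mirroring histogram (b) being the stretched left one-fourth of histogram (a).

```python
import numpy as np

def horizontal_4px_add(row: np.ndarray) -> np.ndarray:
    # Add every 4 horizontally adjacent samples into one,
    # modeling the horizontal 4-pixel addition.
    n = len(row) - len(row) % 4
    return row[:n].reshape(-1, 4).sum(axis=1)

rng = np.random.default_rng(0)
signal = rng.standard_normal(1024)
spec_full = np.abs(np.fft.rfft(signal))               # 513 frequency bins
spec_added = np.abs(np.fft.rfft(horizontal_4px_add(signal)))  # 129 bins
# The added signal's Nyquist limit is 1/4 of the original, so any
# target detail above that frequency is lost before the AF filter
# ever sees it, even if the filter itself is high-pass.
```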
For example, as shown in FIG. 26C, for a target in which a person is present among point light sources, the AF evaluation value might be decreased because the person is a low-frequency-band dominant target. In other words, the AF operation needs to be performed by detecting the frequency component of the target and, if necessary, changing the number of pixels added and the pass frequency band of a filter.