In recent years, imaging devices that use a CMOS (Complementary Metal Oxide Semiconductor) image sensor, which offers features such as small size, low power consumption, and high-speed imaging, have made rapid inroads in the field of both consumer and professional video cameras.
A CMOS image sensor has various features that a CCD (Charge Coupled Device) does not have, and the method of reading out charge accumulated in photodiodes (hereinafter, “PD”) is also different between a CMOS image sensor and a CCD image sensor.
With a CCD image sensor, PD charge readout is performed at the same time in all pixels, using the so-called global shutter method. With a CMOS image sensor, on the other hand, PD charge readout is performed using the so-called rolling shutter method, in which the readout time is shifted line-by-line (pixel-by-pixel). A CMOS image sensor therefore has disadvantages that a CCD image sensor does not have, because the accumulated charge readout time is shifted and the timing of the accumulation period of each pixel is shifted accordingly.
One example of a problem is the phenomenon in which white band-shaped artifacts appear in the captured image when a video camera with a CMOS image sensor captures a subject being illuminated with flashes from a still camera or the like. Here, “white band-shaped artifact” refers to the phenomenon in which only part of a certain frame of a captured image is influenced by a flash, so that only the portion above a midline (screen upper portion) or below it (screen lower portion) becomes brighter.
This phenomenon is described below with reference to FIGS. 34 and 35.
FIG. 34 is a diagram illustrating an imaging scene in which both a video camera and still cameras are present, such as a press conference.
FIG. 34 illustrates an imaging scene including a video camera 100, a monitor 101 displaying the imaging signal thereof, still cameras 102 and 103, and a subject 104. The video camera 100 uses a CMOS image sensor.
In such an imaging scene, when the still cameras 102 and 103 emit flashes, white band-shaped artifacts appear on the screen of the monitor 101 that is displaying the imaging signal from the video camera 100. This principle is described below.
FIG. 35 is a diagram illustrating the accumulation period (exposure period), the readout timing, the readout period, and the scan period of the video camera 100.
FIG. 35 shows the charge accumulation periods and the scan periods for reading out that charge for each scan line constituting the screen (the image captured by the video camera 100), with the horizontal axis indicating time. Envisioning the case of a high-definition camera, the total number of scan lines is assumed to be 1,125. Also, the “monitor screen 0 period” shown in FIG. 35 is the period in which the imaging signal of frame 0 is output to the monitor screen or the like, and is assumed here to be 1/60 sec. The same applies to the “monitor screen 1 period” and so on.
In the video camera 100, consider line 1, the top line of the screen (one line's worth of pixels (a PD being disposed in each pixel) on the face of the imaging element of the CMOS image sensor, which acquire the video signal forming line 1). PD accumulation (charge accumulation in a PD) for frame 1 starts exactly when the monitor screen 0 period starts, and ends when one frame period, that is to say, the monitor screen 0 period, ends. Immediately after the PD accumulation ends, scanning for accumulated charge readout (“accumulated charge readout” will sometimes be simply referred to as “readout”) of the accumulated PD signal of line 1 starts, and PD accumulation for the next frame 2 starts at the same time. Since 1,125 lines are scanned in one frame period (1/60 sec), the PD signal readout scan period is (1/60)/1,125 ≈ 1.48×10⁻⁵ sec.
Next, for line 2, PD accumulation starts when the PD readout scan period for frame 0 in line 1 ends. In other words, the PD accumulation and readout operations for line 2 are delayed with respect to those for line 1 by an amount corresponding to the PD readout scan period. The same operations are performed for line 3 and subsequent lines as well.
In this way, with the rolling shutter method, the charge accumulation periods for the lines constituting a frame are shifted little by little from top to bottom, as shown in FIG. 35. Accordingly, the scan periods of the lines, that is to say, the PD signal readout times, each occur immediately after the corresponding charge accumulation period, as shown in FIG. 35. In other words, in the video camera 100 using a CMOS image sensor, PD signal readout processing is performed sequentially in line order: the PD signal for line 1 is read out, then the PD signal for line 2, and so on.
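The per-line timing described above can be sketched numerically. The helper below is a hypothetical illustration (the function name and structure are not from this description), assuming the 1,125-line, 1/60 sec figures given above.

```python
# Sketch of the rolling-shutter timing: each line's accumulation and readout
# are delayed by one scan period relative to the previous line.

FRAME_PERIOD = 1.0 / 60                  # one monitor-screen period, in seconds
NUM_LINES = 1125                         # scan lines assumed for a high-definition camera
SCAN_PERIOD = FRAME_PERIOD / NUM_LINES   # PD signal readout scan period per line

def line_timing(line, frame):
    """Return (accumulation_start, readout_start) for a 1-indexed line of a
    given frame, taking the start of the monitor screen 0 period as t = 0."""
    # Line 1 of frame 1 starts accumulating at t = 0 and is read out one
    # frame period later; each subsequent line is shifted by SCAN_PERIOD.
    accumulation_start = (frame - 1) * FRAME_PERIOD + (line - 1) * SCAN_PERIOD
    readout_start = accumulation_start + FRAME_PERIOD
    return accumulation_start, readout_start

print(f"scan period ≈ {SCAN_PERIOD:.2e} s")  # about 1.48e-05 s, matching the text
```

The key property is that consecutive lines differ by exactly one scan period, which is what staggers the accumulation windows down the screen.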
Here, as shown in FIG. 35, when a flash is emitted near the middle of the monitor screen 1 period (the period indicated as “flash emission period” in FIG. 35), the bright flash light influences the latter half of the charge accumulation period for frame 1 and the first half of the charge accumulation period for frame 2. As shown in FIG. 35, the flash emitted in the monitor screen 1 period spans the charge accumulation times and the charge readout times for frame 1 and frame 2 in line X and line Y.
Specifically, the bright flash light in the case shown in FIG. 35 has the following influence.
(Frame 1, lines a1 (lines belonging to portion indicated as “a1” in FIG. 35)):
In frame 1, the lines a1 portion before line X is not influenced by the flash light (the charge accumulation period has already ended).
(Frame 1, line X to line Y (lines belonging to portion indicated as “a2” in FIG. 35)):
In frame 1, the lines a2 portion in the period between line X and line Y is influenced by the flash light, and the accumulated light quantity gradually increases.
(Frame 1, line Y and subsequent lines (lines belonging to portion indicated as “a3” in FIG. 35)):
The lines a3 portion from line Y onward is influenced by the full light quantity of the flash light.
(Frame 2, lines b1 (lines belonging to portion indicated as “b1” in FIG. 35)):
Conversely, in frame 2, the lines b1 portion before line X is influenced by the full light quantity of the flash light.
(Frame 2, line X to line Y (lines belonging to portion indicated as “b2” in FIG. 35)):
In the lines b2 portion in the period between line X and line Y, the influence of the flash light gradually decreases.
(Frame 2, line Y and subsequent lines (lines belonging to portion indicated as “b3” in FIG. 35)):
In the lines b3 portion from line Y onward, the flash light has no influence since the accumulation period has not started yet.
Accordingly, assuming that the flash emission lasts only a moment and that the transient periods corresponding to the portions a2 and b2 in FIG. 35 are absent, the result on the monitor, as shown in the lower portion of FIG. 35, is that the lower half of monitor screen 1 (the image formed by the imaging signal of frame 1) is bright and the upper half of monitor screen 2 (the image formed by the imaging signal of frame 2) is bright; white band-shaped artifacts thus appear on the video display device. In the case of an imaging device using a CCD image sensor, unlike a CMOS image sensor, charge accumulation is performed at the same time for all of the lines constituting a frame, so the above problem does not occur, and a natural image that is entirely bright appears when a flash is emitted.
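The split of a single flash across two frames can be illustrated with a short sketch (a hypothetical simulation, not from the patent): a momentary flash falls inside each line's accumulation window for exactly one of the two frames, brightening the lower lines of frame 1 and the upper lines of frame 2.

```python
# Classify which lines of frames 1 and 2 are brightened by a momentary flash,
# using the rolling-shutter timing assumed in the description (1,125 lines,
# 1/60 sec frame period).

FRAME_PERIOD = 1.0 / 60
NUM_LINES = 1125
SCAN_PERIOD = FRAME_PERIOD / NUM_LINES

def flashed_lines(t_flash):
    """Return (frame 1 lines, frame 2 lines) whose accumulation window
    [start, start + FRAME_PERIOD) contains a momentary flash at t_flash."""
    hit = {1: [], 2: []}
    for frame in (1, 2):
        for line in range(1, NUM_LINES + 1):
            start = (frame - 1) * FRAME_PERIOD + (line - 1) * SCAN_PERIOD
            if start <= t_flash < start + FRAME_PERIOD:
                hit[frame].append(line)
    return hit[1], hit[2]

# A flash near the middle of the monitor screen 1 period: the lower portion
# of frame 1 and the upper portion of frame 2 are affected.
f1, f2 = flashed_lines(1.5 * FRAME_PERIOD)
print(len(f1), len(f2))
```

Every line is brightened in exactly one of the two frames, which is why the band appears in the lower half of one screen and the upper half of the next.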
In this way, an imaging device that uses a CMOS image sensor has a first problem: white band-shaped artifacts appear in the video formed by its imaging signal when an external flash is emitted. It also has a second problem: performing appropriate processing on white band-shaped artifacts that appear in a video requires appropriately detecting whether such artifacts have appeared due to the influence of an external flash.
First, a description will be given of conventional technology for solving the first problem.
The imaging device disclosed in Patent Literature 1 is an example of a conventional imaging device that solves the first problem.
FIG. 36 is a block diagram showing an example of the configuration of a conventional imaging device 900. The imaging device 900 is a so-called digital still camera that mainly records still images.
As shown in FIG. 36, the imaging device 900 includes an imaging unit 113, an image processing unit 114, a recording/display processing unit 116, a buffer 117, an evaluation unit 120, a storage unit 121, and a control unit 123.
The imaging unit 113 includes an imaging element, a CDS (Correlated Double Sampling) circuit, an A/D (Analog/Digital) circuit, a signal generator (SG), a timing generator (TG), and the like that are not shown, and the imaging unit 113 captures images of a subject and supplies images obtained as a result to the image processing unit 114.
The imaging element is constituted by a CCD image sensor, a CMOS image sensor, or the like; it acquires an image signal, which is an electrical signal, by receiving incident light from a subject and performing photoelectric conversion, and outputs the acquired image signal. The imaging element is constituted by a plurality of pixels disposed planarly in a lattice arrangement, each of which accumulates a charge according to the quantity of received light, and it receives light for a predetermined exposure time in accordance with a horizontal drive signal and a vertical drive signal supplied from the timing generator. The imaging element supplies the accumulated charges to the CDS circuit as an analog image signal.
The CDS circuit eliminates a noise component of the analog image signal supplied from the imaging element by performing correlated double sampling.
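The principle of correlated double sampling can be sketched as follows (a minimal illustration, not the circuit in FIG. 36): each pixel is sampled twice, once at its reset level and once at its signal level, and the difference cancels the offset and reset noise common to both samples.

```python
# Minimal sketch of correlated double sampling: subtracting each pixel's
# reset-level sample from its signal-level sample removes the common offset.

def cds(reset_samples, signal_samples):
    """Return signal minus reset for each pixel."""
    return [s - r for r, s in zip(reset_samples, signal_samples)]

# A shared offset of roughly 100 units disappears from the output:
print(cds([100, 102, 99], [150, 182, 119]))  # [50, 80, 20]
```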
The image signal from which the noise component has been eliminated is supplied by the CDS circuit to the A/D circuit.
The A/D circuit performs A/D conversion on the analog image signal from the CDS circuit, and supplies the digital image data obtained as a result to the image processing unit 114.
Under control of the control unit 123, the signal generator generates a horizontal synchronization signal and a vertical synchronization signal, and outputs them to the timing generator.
Based on the horizontal synchronization signal and the vertical synchronization signal supplied from the signal generator, the timing generator generates the horizontal drive signal and the vertical drive signal for driving the imaging element, and supplies them to the imaging element.
The image processing unit 114 includes a Y-C separation circuit, a filter circuit, a WB (White Balance) circuit, an aperture compensation/gamma circuit, and the like that are not shown, and the image processing unit 114 performs predetermined image processing on the image data supplied from the A/D circuit of the imaging unit 113.
The image processing unit 114 supplies the image (image data) subjected to image processing to the recording/display processing unit 116 and the evaluation unit 120.
The Y-C separation circuit performs Y-C separation processing for separating the image data from the imaging unit 113 into a luminance signal (Y signal) and a chrominance signal (C signal).
The filter circuit performs noise reduction processing for filtering the image data from the imaging unit 113 and removing a noise component included in the image data.
The WB circuit performs processing for adjusting the white balance of an image by multiplying the image data from the imaging unit 113 by a gain so as to, for example, equalize the R, G, and B levels of a white subject.
The aperture compensation/gamma circuit subjects the image data from the imaging unit 113 to processing for adjusting the image quality through, for example, aperture correction for emphasizing edge portions in an image and gamma correction for adjusting the shade of an image.
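The processing steps listed above can be illustrated for a single RGB pixel. The sketch below is a hedged example: the BT.601 luma weights and the gamma value are common illustrative choices, not values taken from the patent.

```python
# Illustrative single-pixel versions of the white balance, Y-C separation,
# and gamma correction steps performed by the image processing unit.

def white_balance(r, g, b, gain_r, gain_b):
    """Multiply R and B by gains so that a white subject has equal R, G, B."""
    return r * gain_r, g, b * gain_b

def luminance(r, g, b):
    """Luminance (Y signal) part of Y-C separation, using BT.601 weights."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def gamma_correct(v, gamma=2.2):
    """Gamma correction for a value normalized to [0, 1]."""
    return v ** (1.0 / gamma)

# A bluish white (R=0.8, G=1.0, B=1.2) balanced back to neutral:
r, g, b = white_balance(0.8, 1.0, 1.2, gain_r=1.25, gain_b=1.0 / 1.2)
print(round(luminance(r, g, b), 3))  # 1.0
```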
The recording/display processing unit 116 receives the image (data) subjected to image processing from the image processing unit 114, and performs output control for outputting the image to a recording unit or a display unit that are not shown.
The buffer 117 stores data that needs to be temporarily stored when the recording/display processing unit 116 performs output control.
The evaluation unit 120 includes a detection circuit (not shown) that detects the brightness and color distribution of an image, and receives the image captured by the imaging element from the image processing unit 114.
The detection circuit performs detection on the image supplied from the image processing unit 114, and outputs information obtained as a result (e.g., information indicating the brightness and color distribution of a predetermined portion of the image, and information indicating the spatial frequency of a predetermined portion of the image) as an evaluation value with respect to the image captured by the imaging element.
The evaluation unit 120 supplies the evaluation value output by the detection circuit to the control unit 123.
The storage unit 121 is made up of a ROM (Read Only Memory), a RAM (Random Access Memory), an EEPROM (Electrically Erasable and Programmable Read Only Memory), or the like, and the storage unit 121 stores, for example, programs executed by a CPU (Central Processing Unit) (not shown) of the control unit 123, data necessary when the control unit 123 performs processing, and data that needs to be held even when the imaging device is powered off.
The control unit 123 includes the CPU and a calculation circuit that calculates a difference value, neither of which is shown. The control unit 123 receives the evaluation value with respect to the image captured by the imaging element from the evaluation unit 120, and temporarily stores it in the storage unit 121.
The calculation circuit calculates a difference value between two predetermined evaluation values. The control unit 123 then controls various units of the imaging device based on the difference value calculated by the calculation circuit.
According to this configuration, with the conventional imaging device 900, when a still image or moving image is captured by the imaging unit 113 in accordance with a user operation, for example, the captured image is subjected to predetermined image processing by the image processing unit 114 and thereafter supplied to the recording/display processing unit 116 and the evaluation unit 120.
The recording/display processing unit 116 causes the image obtained after the predetermined image processing was carried out by the image processing unit 114 to be buffered in the buffer 117, and in the evaluation unit 120, an evaluation value is generated for the image by the detection circuit and supplied to the control unit 123. The control unit 123 then temporarily stores the evaluation value in the storage unit 121.
The calculation circuit of the control unit 123 calculates a difference value between this evaluation value and the evaluation value previously stored in the storage unit 121, that is to say, the evaluation value generated from the image one frame earlier. If the difference value is greater than or equal to a reference value set in advance, it is determined that the image has been negatively influenced by an external flash; if the difference value is less than the reference value, it is determined that the image has not been so influenced. The control unit 123 controls various units of the imaging device in accordance with the result of this determination, discarding the image when it determines that the image has been negatively influenced by an external flash, and outputting the image otherwise.
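The discard decision described above can be sketched as follows. The function names and the choice of mean luminance as the evaluation value are assumptions for illustration; the patent only specifies that a frame-to-frame difference in evaluation values is compared against a preset reference value.

```python
# Sketch of frame-difference flash detection: discard a frame whose
# evaluation value jumps by at least a preset reference value.

def evaluation_value(frame_luma):
    """A simple evaluation value: mean luminance of the frame (assumed)."""
    return sum(frame_luma) / len(frame_luma)

def is_flash_affected(prev_eval, curr_eval, reference):
    """A difference at or above the reference value indicates flash influence."""
    return abs(curr_eval - prev_eval) >= reference

frames = [[50, 52, 51], [51, 50, 52], [200, 210, 205]]  # third frame lit by a flash
prev = evaluation_value(frames[0])
for frame in frames[1:]:
    curr = evaluation_value(frame)
    print("discard" if is_flash_affected(prev, curr, reference=30) else "output")
    prev = curr
```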
Accordingly, the conventional imaging device 900 solves the problem of white band-shaped artifacts that appear due to an external flash.
Next, a description will be given of conventional technology for solving the second problem.
The technology disclosed in Patent Literature 2 is an example of conventional technology that solves the second problem.
With this conventional technology, whether an imaging signal has been influenced by a flash (external flash) is determined by dividing a video into blocks of an appropriate size and examining whether the brightness of a block has risen compared to that of the block at the same position in the previous field. Specifically, as shown in FIG. 37, a difference unit 1102 calculates the difference between an input imaging signal VI and a signal obtained by a delay unit 1101 delaying the input imaging signal VI by one field, a summation unit 1103 obtains a sum of the differences for each of the predetermined number of blocks into which the image was divided, a counter 1105 counts the number of blocks for which the sum value is greater than α, and it is determined that a flash (external flash) was emitted (the imaging signal has been influenced by an external flash) if the counted number is greater than or equal to β and less than or equal to γ. This conventional technology thus enables determining whether an imaging signal has been influenced by an external flash.
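The block-based detection in FIG. 37 can be sketched as below. The three thresholds are written here as alpha, beta, and gamma, matching the three thresholds in the description; the specific values, block size, and one-dimensional field representation are illustrative assumptions.

```python
# Sketch of block-based flash detection: sum the field-to-field differences
# within each block, count blocks whose sum exceeds alpha, and report a flash
# when the count lies between beta and gamma inclusive.

def detect_flash(curr_field, prev_field, block_size, alpha, beta, gamma):
    """Return True if the per-block difference sums indicate an external flash."""
    count = 0
    for start in range(0, len(curr_field), block_size):
        block_sum = sum(c - p for c, p in zip(curr_field[start:start + block_size],
                                              prev_field[start:start + block_size]))
        if block_sum > alpha:       # counter increments for bright-rising blocks
            count += 1
    return beta <= count <= gamma   # too few or too many blocks is not a flash

prev = [50] * 16
curr = [50] * 8 + [220] * 8         # lower half brightened by a flash band
print(detect_flash(curr, prev, block_size=4, alpha=100, beta=1, gamma=3))  # True
```

The upper bound gamma serves to reject whole-scene brightness changes (e.g. a scene cut), where nearly every block would exceed alpha.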