The present disclosure relates to an imaging apparatus, a signal processing method, and a program. More particularly, it relates to an imaging apparatus, a signal processing method, and a program enabling the generation of images with a wide dynamic range and high quality, by image synthesizing processing using multiple images with different exposure times.
Solid-state imaging devices such as CCD (Charge-Coupled Device) image sensors and CMOS (Complementary Metal Oxide Semiconductor) image sensors, used with video cameras, digital still cameras, and the like, accumulate charge corresponding to the amount of incident light, and perform photoelectric conversion in which an electric signal corresponding to the accumulated charge is output. However, there is an upper limit to the amount of charge that can be accumulated in the photoelectric conversion device, and upon receiving light of a certain amount or greater the accumulated charge reaches a saturation level, resulting in what are called “clipped whites”, where subject regions of a certain brightness or more are set to a saturated luminance level.
In order to prevent such a phenomenon, processing is performed such as adjusting the exposure time by controlling the charge accumulation period at the photoelectric conversion device in accordance with change in external light and so forth, so as to control the sensitivity to an optimal value. For example, with regard to a bright subject, the shutter is released at high speed so as to reduce the exposure time, thereby shortening the charge accumulation period at the photoelectric conversion device and outputting electric signals before the accumulated charge amount reaches the saturation level. Such processing enables output of an image in which gradation corresponding to the subject is accurately reproduced.
However, when imaging a subject including both bright and dark portions, releasing the shutter at high speed results in insufficient exposure time at the dark portions, deteriorating the S/N (Signal-to-Noise) ratio and degrading the image quality. In order to accurately reproduce the luminance levels of both bright portions and dark portions in a shot image of a subject where there are both, a high S/N ratio has to be realized by a long exposure time for pixels with little incident light on the image sensor, along with processing to avoid saturation at pixels with greater incident light.
There is related art which uses multiple images with different exposure times. This technique uses a long-period exposure image for dark image regions, and a short-period exposure image for image regions where clipped whites would occur with a long exposure period, so as to determine an optimal pixel level. By synthesizing multiple images with different exposures, an image with a wide dynamic range and no clipped whites can be obtained.
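The per-pixel selection described above can be sketched as follows; this is an illustrative outline under assumed names and an assumed 12-bit saturation level, not the method of any particular publication.

```python
import numpy as np

def wdr_synthesize(long_img, short_img, exposure_ratio, sat_level=4095):
    """Illustrative wide dynamic range synthesis of a long-exposure and a
    short-exposure image (function name, arguments, and the 12-bit
    saturation level are assumptions for this sketch).

    long_img, short_img : 2-D arrays of raw pixel values
    exposure_ratio      : long exposure time divided by short exposure time
    sat_level           : level at which a long-exposure pixel is clipped
    """
    long_img = np.asarray(long_img, dtype=np.float64)
    short_img = np.asarray(short_img, dtype=np.float64)
    # Where the long exposure has clipped white, substitute the
    # short-exposure pixel scaled up by the exposure ratio; elsewhere
    # keep the long-exposure pixel for its better S/N ratio.
    clipped = long_img >= sat_level
    return np.where(clipped, short_img * exposure_ratio, long_img)
```

The scaling by the exposure ratio places both images on a common linear brightness scale, which is why the synthesized result spans a wider range than either source image alone.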
For example, disclosed in Japanese Unexamined Patent Application Publication No. 2008-99158 and Japanese Unexamined Patent Application Publication No. 2008-227697 are configurations for synthesizing multiple images of different exposure amounts to obtain an image with a wide dynamic range.
FIG. 1 is a block diagram of an imaging apparatus 10 according to the related art which synthesizes images of two types of sensitivity obtained by switching the exposure time of an imaging device between long exposure and short exposure at each vertical period, and generates a wide dynamic range image.
Processing for generating the wide dynamic range image with the imaging apparatus 10 will be described. Light input through a lens 11 is subjected to photoelectric conversion at an imaging device 12, and the output picture signals are subjected to correlated double sampling processing and AGC (Automatic Gain Control) at an analog front end 13, and then subjected to A/D (Analog-to-Digital) conversion to become digital signals. The digital imaging signals output from the analog front end 13 are input to a signal processing unit 20.
At the signal processing unit 20, first, Y signals which are luminance signals, and R signals, G signals, and B signals, which are color signals, are generated by a YRGB generating circuit 21, YRGB also being called Luminance RGB. The Y signals, R signals, G signals, and B signals output from the YRGB generating circuit 21 are subjected to appropriate signal processing at a first signal processing unit 22, and then write processing to memory 23 is performed.
The memory 23 stores a low-sensitivity image and a high-sensitivity image with different exposure times at the imaging device, i.e., a long exposure image and a short exposure image. Thereafter, the long exposure image and short exposure image are read out from the memory 23 and input to an image synthesizing unit 24, which performs WDR (Wide Dynamic Range) synthesizing, where processing is performed to obtain a wide dynamic range by image synthesizing.
Subsequently, the synthesized image with wide dynamic range is input to an image correction unit 25 to perform γ correction processing and so forth, including processing for converting the YRGB signals into YCbCr signals including color difference signals for example, and further, final signal processing is performed at a second signal processing unit 26 and a final output image is generated.
Note that often, the first signal processing unit 22 performs frequency correction, signal level correction, WB (White Balance) correction, and so forth; the first signal processing unit 22 and the second signal processing unit 26 perform vertical inversion of the image, slow-shutter/still-shutter operations, hand movement compensation, electronic zoom, and so forth, in conjunction with control of the memory 23; and the second signal processing unit 26 performs peak clipping, generating of color difference signals, OSD (On Screen Display), output encoding processing, and so forth.
With imaging apparatuses generating wide dynamic range images by synthesizing processing of long exposure images and short exposure images, the rated value of the output signal level from the imaging apparatus 10 is the same between a normal image which has not been made a wide dynamic range image and an image which has been made a wide dynamic range image. Accordingly, the contrast and brightness of the subject in the two images will differ from each other depending on the state of the subject, and the image which has been made a wide dynamic range image is often not the desirable image. Accordingly, many arrangements enable shooting of both normal images not made a wide dynamic range image and images made a wide dynamic range image, with whether or not to make a wide dynamic range image being selected in accordance with the state of the subject.
As shown in FIG. 1 here, the synthesizing processing by the image synthesizing unit 24, which performs processing for making a wide dynamic range image by synthesizing images, has to be performed before the γ processing by the image correction unit 25. Accordingly, for example, with an imaging apparatus which generates Y, Cr, and Cb output signals of 8 bits each, according to the related art, data is written to and read from the memory in a data format called the 422 format, with a resolution of one pair of color signals to two pixels of luminance signals, at 10 bits or more each for the luminance signals (Y) and the color signals (G), (R), and (B), as shown in FIGS. 2A and 2B.
As shown in FIG. 2A, data read/write is performed in the 422 format, where signals in increments of pixels are used for the luminance signal (Y), but an average value of two pixels is used for the color signals (G), (R), and (B). FIG. 2B illustrates an example of setting bits for each of the signals in a case of performing read/write with 10 bits of luminance signals (Y) and 12 bits of color signals (G), (R), and (B).
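The 422-style color subsampling described here can be sketched as follows; the function names are hypothetical, and the sketch shows only the pair averaging, not the exact bit packing of FIGS. 2A and 2B.

```python
import numpy as np

def average_pairs(channel):
    """Average each pair of horizontally adjacent pixels (422-style
    color subsampling); the image width is assumed to be even."""
    c = np.asarray(channel, dtype=np.float64)
    return (c[:, 0::2] + c[:, 1::2]) / 2.0

def pack_422(y, g, r, b):
    """Return the signals as stored per the 422 format described above:
    luminance kept per pixel, one averaged color value per two pixels."""
    return np.asarray(y), average_pairs(g), average_pairs(r), average_pairs(b)
```

Halving the horizontal color resolution this way reduces the number of bits that must cross the memory bus per pixel clock, which is the motivation for the format in this context.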
Note that for data read methods from the imaging device, the related art includes the progressive read method in which all pixels are read out as independent signals, and the interlaced read method in which the pixels of two vertically adjacent lines are mixed and read out.
The imaging device driving frequencies for the NTSC and PAL formats, which are interlaced display formats, are 13.5 MHz, 14 MHz, 18 MHz, and so forth, in the case of interlaced readout from the imaging device, while in the case of progressive readout, double these values, i.e., 27 MHz, 28 MHz, and 36 MHz, are often used.
In the event that the read method from the imaging device is progressive read, two reads have to be performed for each write in order to perform synthesizing processing of two images to make a wide dynamic range image, so in order to use a single memory 23, the memory has to be operated at a frequency of at least three times the driving frequency of the imaging device, which in the case of progressive read comes to 81 MHz, approximately 86 MHz, and 108 MHz.
In order to perform read/write of YRGB in the 422 format, 50÷2=25 bits of data are used per clock, so either memory with a 32-bit data width has to be used, or the frequency has to be doubled further to 162 MHz, approximately 171 MHz, and 216 MHz, to use 16-bit memory. Four images have to be stored in order to obtain a wide dynamic range image with an imaging apparatus compatible with the NTSC or PAL formats, and while less than 64 MB is sufficient for the data size even when operating progressively, 133 to 166 MHz is the maximum frequency for memory corresponding to 64 MB, so instead of using 133 MHz, the data width had to be 32 bits or high-speed memory had to be employed.
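The frequency figures quoted above can be checked with a short worked calculation; the 27 MHz and 36 MHz progressive cases are used here, and the one-write-plus-two-reads relationship is as stated in the text.

```python
# Worked check of the memory-frequency figures in the text, using the
# 27 MHz and 36 MHz progressive driving frequencies (the ~28 MHz case,
# quoted as approximate in the text, is omitted here).
sensor_progressive_hz = [27e6, 36e6]

# One write plus two reads per imaging-device clock means a single
# memory 23 must run at no less than three times the driving frequency.
memory_min_hz = [3 * f for f in sensor_progressive_hz]   # 81 MHz, 108 MHz

# 25 bits per clock exceeds a 16-bit data bus, so with 16-bit memory the
# frequency has to be doubled once more.
memory_16bit_hz = [2 * f for f in memory_min_hz]         # 162 MHz, 216 MHz
```

This is the trade-off the passage describes: either widen the data bus to 32 bits, or keep a 16-bit bus and pay for it in memory clock frequency.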
With the former, there have been problems such as increased mounting area due to increased memory control lines or the use of two memory devices, and increased cost due to having to use types of memory devices with little demand. Also, there has been the problem of deterioration in image quality due to insufficient data resolution in the event of reducing the data format of the memory to 16 bits or lower; this deterioration in image quality has been particularly conspicuous in cases where WDR synthesizing for making a wide dynamic range is not performed. Also, there has been the problem of error images occurring at the time of switching, with imaging apparatuses which operate by switching between the two types of data formats, i.e., wide dynamic range images and normal images.