Numerous everyday scenes have a far greater intensity range than can be recorded by an electronic imaging apparatus (e.g., a still or video camera). This is because electronic imaging apparatus exhibit limited dynamic range (response); i.e., the exposure time, which must be long enough for sufficient electronic detection of shadowed areas, must at the same time be short enough to prevent saturation in high-intensity areas. This is an impossibility for most images. The limited dynamic range thus forces a compromise, leading to reduced detail in both shadowed and highlighted regions: deep shadows are recorded as undifferentiated black, and bright highlights are “washed out” as undifferentiated white.
Scenes with a high dynamic range, having both strong highlights and important shadow detail, cannot be imaged with substantial fidelity. In effect, the operator must choose how much of the tonal scale is to be sacrificed. Consider, for example, an image of an individual in front of a bright window. Exposing for the person will cause the window to appear as a uniformly white background shape, while exposing for the outdoor scene visible through the window turns the person into a black image, or silhouette.
In electronic photography, images are represented as an ordered series of picture elements, or “pixels”, arranged in a linear or planar grid. On the display, each pixel will ordinarily specify a luminance or brightness value and chrominance values that specify the color of the pixel. Intensity variance is typically characterized in terms of image “depth”, referring to the number of discrete brightness levels that the system is capable of resolving and displaying.
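The relation between image depth and the number of resolvable brightness levels can be sketched as follows. This is an illustrative example, not part of the specification; the depths shown are merely typical values.

```python
# Illustrative sketch: the number of discrete brightness levels a system can
# resolve and display grows as 2**depth, where "depth" is the number of bits
# used to encode each pixel's brightness value.
def brightness_levels(depth_bits):
    """Return the number of discrete brightness levels for a given bit depth."""
    return 2 ** depth_bits

brightness_levels(8)   # → 256 levels (a common display depth)
brightness_levels(12)  # → 4096 levels
```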
A number of methods have been suggested in the past to overcome this problem. Among the more frequently used are methods that capture the scene at several different exposures and combine the resultant images in various ways. The different exposures can be obtained by changing the integration time, which is the time during which the pixel is exposed to incoming light, or by adjusting the amount of incoming light that falls on some or all of the pixels. The integration time for each frame begins with a reset signal and ends with a sample signal, i.e., the reading. In prior active pixel sensors (APS) having global control, all pixels in the sensor array are reset, and later sampled, at the same time.
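The combination of differently exposed frames described above can be sketched as follows. This is a minimal illustrative example, not a method taught by the specification; the pixel values, the saturation threshold, and the exposure ratio are hypothetical 8-bit assumptions.

```python
# Illustrative sketch (assumed values): merging two frames taken with
# different integration times into one frame of extended dynamic range.
SATURATION = 255        # value at which a pixel is considered saturated
EXPOSURE_RATIO = 4      # long integration time / short integration time

def merge_pixel(long_exp, short_exp):
    """Prefer the long exposure for shadow detail; where it has saturated,
    fall back to the short exposure rescaled by the exposure ratio."""
    if long_exp < SATURATION:
        return long_exp
    return short_exp * EXPOSURE_RATIO

long_frame = [12, 130, 255, 255]   # good shadow detail, clipped highlights
short_frame = [3, 33, 70, 255]     # highlights preserved, shadows noisy
merged = [merge_pixel(l, s) for l, s in zip(long_frame, short_frame)]
# merged = [12, 130, 280, 1020] — a wider range than either frame alone
```

Note that such a merge is only consistent if the scene does not move between the two exposures, which is precisely the distortion problem discussed next.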
The problems with these methods fall into two areas. Firstly, any movement in the image, whether real movement in the scene or movement of the camera, is rendered differently at different exposure times, and as a result the overall image, obtained by combining the different exposures, becomes distorted. The second area in which problems arise is the response of the system to strong changes in the intensity of the scene. In some systems the ratio of the exposures is kept constant, in which case the coverage of the intensity range of the real scene is not optimal. In others, the exposures are changed independently, and it takes a long time to adapt to a new scene.
Many of the prior art methods that attempt to solve the problem of the limited dynamic range of electronic cameras, such as those based on CCD technology and especially CMOS detectors, make use of a technique known as pixel reset. In this technique, additional electronic circuitry is added to the pixel array, and additional steps are added to the image processing algorithms, in order to reset the pixels a number of times during each integration time for the frame. The total number of photons (charge) accumulated during the entire integration time for the frame is then determined by summing the charge accumulated before each reset of the pixel together with the charge accumulated after the last reset. This technique is accomplished in a variety of ways. In some cases, the entire array of pixels is subject to resets, which occur according to a preset criterion, usually based on time.
In other methods, each individual pixel is provided with circuitry which resets the pixel when it has filled up to a certain level. Additional circuitry associated with each pixel counts and remembers the number of resets for the pixel. Various publications teach different methods for determining the criterion for resetting the pixel to zero and for determining the total integrated charge. In the most basic sense, the total integrated charge for each pixel is the residual charge measured at the time of the sample signal, plus the number of resets times the amount of charge allowed to accumulate before the pixel is reset. Variations of this method of extending the dynamic range are taught, for example, in U.S. Pat. No. 6,831,689, U.S. Pat. No. 6,927,796, U.S. Pat. No. 5,872,596, and in international patent application WO 93/14595. The common disadvantage of all the prior art methods of individual pixel reset, especially for CMOS cameras, is the additional cost of the extra counting and memory circuitry, or, in other solutions, analog-type accumulators, which must be added at each pixel site. Moreover, these additional circuit elements require space, thereby effectively reducing the light-gathering ability of the array or increasing its size, with attendant cost increases and signal-to-noise problems. One solution to the space problem is to move the counting and memory storage elements from the pixel site to the central processing area of the detector. This solution, however, also requires cumbersome signal sampling and transfer procedures that affect the overall performance of the detector. It may also impose limitations on the actual time at which each pixel can perform its reset cycle, if the design requires the “attention” of the central processing unit. This method therefore reduces the accuracy and sensitivity of the solution.
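The basic charge reconstruction described above can be expressed as a short calculation. This is an illustrative sketch, not code from any of the cited patents; the full-well value and the example pixel readings are assumptions.

```python
# Illustrative sketch (assumed values): reconstructing the total integrated
# charge of a pixel from its reset count and its residual charge at sampling.
FULL_WELL = 1000  # hypothetical charge allowed to accumulate before a reset

def total_integrated_charge(reset_count, residual_charge):
    """Total charge = residual charge at the sample signal
    + (number of resets) * (charge accumulated before each reset)."""
    return residual_charge + reset_count * FULL_WELL

# A pixel that reset 3 times and held 420 units at the sample signal:
total_integrated_charge(3, 420)  # → 3420
# A pixel that never reset simply reports its residual charge:
total_integrated_charge(0, 500)  # → 500
```

The per-pixel counter and memory needed to supply `reset_count` are precisely the extra circuitry whose cost and area this passage identifies as the common disadvantage of these methods.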
It is therefore a purpose of the present invention to provide a method and apparatus for imaging of scenes having large intensity variance.
It is another purpose of the present invention to provide a method and apparatus for expanding the dynamic range of an electronic camera that requires relatively little additional circuit functionality, i.e., by making a relatively small hardware change to existing pixel arrays and augmenting the image processing technique to estimate and determine the true value of the intensity of the light gathered by each individual pixel in the array.
Further purposes and advantages of this invention will appear as the description proceeds.