1. Field of the Invention
The present invention relates to image sensors, and more particularly, to a method and apparatus for an image system having at least one processor with an analog-to-digital converter therein, thereby achieving a better dynamic range of the image sensor in the image system.
2. Description of the Prior Art
Digital cameras have largely replaced conventional film cameras. Typically, a digital camera contains at least an image sensor for converting incident light into electrical charges. Each image sensor consists of an array of detectors, and each detector within the array converts incident light into an electronic signal representative of the magnitude of the incident light.
In digital camera technology, one of the most popular image sensors is the CMOS image sensor. For CMOS image sensors, signals received from photo detectors are read out over column readout lines, one row at a time. During a data readout process, there is no shifting of charge from one pixel to another. Since CMOS image sensors are compatible with typical CMOS fabrication processes, integration of additional signal processing logic on the same substrate on which the sensor array is disposed is therefore permitted.
Typically, a conventional CMOS image sensor contains an active-pixel sensor (APS). Generally speaking, a pixel is an element of an image sensor implemented for generating an output signal of differentiable strength, where the strength of the output signal is proportional to the magnitude of the incident light. Each pixel within the image sensor is implemented for detecting, storing, and sampling a signal.
However, conventional image sensors have some drawbacks. For example, conventional CMOS image sensors have a limited dynamic range (DR). The term “dynamic range” represents the maximum ratio of light intensity level to the noise floor of any given scene in a single image that the image sensor is able to capture. An equation illustrating the definition of the dynamic range DR is shown as follows:
DR = HL/LL   (1)
In the above equation (1), HL represents the highest non-saturated optical flux, while LL represents the lowest detectable optical flux (noise floor).
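Equation (1) can be illustrated numerically. The short sketch below computes the dynamic range as the ratio HL/LL and also expresses it in decibels, as dynamic range is commonly reported; the flux values used are assumed placeholders, not values taken from this specification:

```python
import math

def dynamic_range(hl, ll):
    """Dynamic range per equation (1): the ratio of the highest
    non-saturated optical flux (HL) to the lowest detectable
    optical flux, i.e., the noise floor (LL)."""
    return hl / ll

def dynamic_range_db(hl, ll):
    """The same ratio expressed in decibels (20*log10 of the ratio)."""
    return 20 * math.log10(hl / ll)

# Illustrative values (assumed): a sensor saturating at 1000 flux
# units with a noise floor of 1 flux unit.
print(dynamic_range(1000, 1))      # 1000.0
print(dynamic_range_db(1000, 1))   # 60.0
```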
Please refer to FIG. 1. FIG. 1 is a diagram illustrating transfer curves of five different pixels P1-P5 in an imaging pixel array of an image sensor under different light intensities according to the prior art. As shown in FIG. 1, the imaging pixel array takes an image with a full integration time of t0 (the frame integration time). Pn denotes a pixel n, and In denotes the light intensity to which pixel n is exposed. The light intensities I1-I5 of the respective pixels P1-P5 satisfy the following inequality:

I5 > I4 > I3 > I2 > I1   (2)
The term “integration time” indicates the duration during which photo-generated carriers are collected by a pixel within the image sensor. As shown in FIG. 1, in a case where all the pixels P1-P5 are read out after the full integration time (i.e., t0), only the pixel P1 generates a non-saturated output Vo1, while all the remaining pixels (i.e., P2-P5) output the saturated value Vsat. The relation between the output voltages of the pixels P1-P5 is shown as follows:

Vout1 = Vo1   (3)
Vout2 = Vout3 = Vout4 = Vout5 = Vsat   (4)
Under the readout scheme described above, the readouts of the pixels P2-P5 will therefore include saturated spots containing no meaningful image patterns. To avoid this saturation, when the pixel readout operation of the image sensor is a non-destructive process, a multiple-readout scheme can be applied to increase the dynamic range (DR).
As shown in FIG. 1, in contrast to the aforementioned single-readout process, readouts at time intervals t0/2, t0/4, and t0/8 can produce non-saturated outputs for the pixels P2-P5 as a multiple-readout operation. In addition, the pixel readout process after a longer integration time can provide a better signal-to-noise ratio (SNR) if the readout voltage is not yet saturated.
As shown in FIG. 1, in the foregoing example the readouts for each of the five pixels P1-P5 are listed as below:

t0 readout: P1 outputs Vo1
t0/2 readout: P2 outputs Vo2
t0/4 readout: P3 outputs Vo3
t0/8 readout: P4 outputs Vo4
t0/8 readout: P5 outputs Vsat
For a linear response to incident light, the final equivalent outputs are listed as below:
P1: Vo1
P2: 2Vo2
P3: 4Vo3
P4: 8Vo4
P5: 8Vsat
In the aforementioned case, the multiple-readout scheme increases the saturation level by eight times. If the noise floor remains the same, the dynamic range is accordingly increased by eight times. In general, if the minimum integration time is 1/m of the frame integration time (i.e., t0), the dynamic range will be increased by m times.
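The bookkeeping of the multiple-readout scheme described above can be sketched as follows: for each pixel, keep the output from the longest integration time that is still below saturation and scale it by t0 divided by that time, assuming a linear response. The pixel values below mirror the FIG. 1 example, but Vsat and the non-saturated readout magnitudes are assumed placeholders, not values from this specification:

```python
T0 = 1.0      # full frame integration time t0 (normalized)
VSAT = 1.0    # saturation level Vsat (assumed placeholder magnitude)

# Readout times used by the scheme, longest first: t0, t0/2, t0/4, t0/8.
READOUT_TIMES = [T0, T0 / 2, T0 / 4, T0 / 8]

def equivalent_output(readouts):
    """Given {integration_time: readout_voltage} for one pixel, return
    the equivalent full-frame output: take the longest integration time
    whose readout is not saturated and scale its voltage by t0 / t.
    If even the shortest readout is saturated, the pixel remains
    saturated at the boosted level 8 * Vsat (the P5 case)."""
    for t in READOUT_TIMES:              # try the longest time first
        v = readouts[t]
        if v < VSAT:
            return (T0 / t) * v          # linear response: scale up
    return (T0 / READOUT_TIMES[-1]) * VSAT

# Pixel P2 from FIG. 1: saturated at t0, non-saturated from t0/2 on
# (Vo2 = 0.6 is an assumed value); equivalent output is 2 * Vo2.
p2 = {T0: VSAT, T0 / 2: 0.6, T0 / 4: 0.3, T0 / 8: 0.15}
print(equivalent_output(p2))   # 1.2
```

This reproduces the final equivalent outputs listed above (e.g., 2Vo2 for P2, and 8Vsat for a pixel saturated at every readout).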
Please refer to FIG. 2 in conjunction with FIG. 1. FIG. 2 is a flowchart illustrating the conventional operation of one pixel with a multiple-readout scheme for dynamic range enhancement. The conventional operation shown in FIG. 2 has a defect: every DR-enhanced pixel needs its own memory, and the conventional image sensor must include dedicated circuitry for determining whether the output voltage of each DR-enhanced pixel is saturated or not. As a result, the system of a conventional image sensor with the multiple-readout scheme for improved DR is complex, and the cost of the conventional image sensor with the multiple-readout scheme (i.e., DR-enhanced pixels) is high.
Another conventional manner of implementing the image sensor is to digitize the signal at the pixel level. Please refer to FIG. 3. FIG. 3 is a diagram illustrating a pixel array and a pixel of the pixel array according to the prior art. As shown in a sub-diagram (A) of FIG. 3, the pixel array 300 includes a plurality of pixels 301. Referring to a sub-diagram (B) of FIG. 3, the conventional pixel structure of the pixel 301 contains a detecting device 302 (which includes a light detector, a buffer, etc.), an analog-to-digital converter (ADC) 303, and a processing device 304, wherein the processing device 304 includes processing logic, a memory, etc. The pixel 301 has limited use for most applications since, generally, only a certain dynamic range can be utilized. Moreover, implementation of the pixel 301 is complex and expensive. Furthermore, as the pixel 301 has a complex structure, the pixel size is increased, resulting in a trade-off in which the undesired fixed-pattern noise (FPN) is also increased, where FPN represents image-pattern noise associated with the physical locations of the pixels in the array.
Therefore, a novel mechanism and method for improving the dynamic range of an image sensor without increasing system complexity or cost is required.