In a typical prior-art image sensor, photo-generated carriers are integrated for a predetermined, fixed integration time. After integration is complete, the accumulated charge is converted to a voltage and read out. The voltage can be read out from each pixel directly by an x-y scanner, or by a single charge detector common to the whole array if a CCD charge-transfer principle is employed. The output signal from such devices is always an analog voltage proportional to the product of light intensity and integration time.
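The linear readout model described above can be sketched as follows. This is an illustrative simplification, not an implementation from the text; the function name, parameter names, and the conversion-gain value are all assumptions made for the example.

```python
def pixel_output_voltage(intensity, integration_time_s, conversion_gain=0.5):
    """Analog output (volts) for one pixel under the simplified prior-art
    model: voltage proportional to light intensity x integration time.
    conversion_gain is an arbitrary illustrative constant."""
    return conversion_gain * intensity * integration_time_s

# Doubling the light intensity at a fixed integration time doubles the signal,
# which is why bright and dim scene areas produce very different analog levels.
v_dim = pixel_output_voltage(intensity=100.0, integration_time_s=0.02)
v_bright = pixel_output_voltage(intensity=200.0, integration_time_s=0.02)
```

The same proportionality holds for integration time, which is why the fixed integration period directly sets the signal level the charge detector must handle.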
There are several disadvantages to this standard prior-art approach. The analog signal level is small in areas of low illumination and large in areas of high illumination, so the charge detector must have a high dynamic range and high linearity over that range. A very low noise floor is also required to detect the low-level signals. Another disadvantage is the possibility of flicker caused by the beat between the illuminating-source frequency (e.g., fluorescent lighting) and the frame scanning frequency determined by the integration period. Further problems arise when the analog signal must be converted to its digital equivalent: complicated signal-conditioning circuits, such as correlated double sampling (CDS) and automatic gain control (AGC) amplifiers, are needed to interface the sensor to the A/D converter. These circuits consume power, add cost, and can distort the signal if not properly implemented.
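The flicker beat mentioned above can be illustrated with a small calculation. Fluorescent light intensity varies at twice the mains frequency; when the frame scanning rate does not divide that flicker frequency evenly, a slow residual beat appears in the video. The function below is a hedged sketch of that relationship; the specific mains and frame-rate numbers are illustrative assumptions, not values from the text.

```python
def flicker_beat_hz(mains_hz, frame_rate_hz):
    """Approximate beat frequency between lamp flicker and frame scanning.

    Fluorescent lamp brightness peaks twice per mains cycle, so the optical
    flicker frequency is 2 x mains_hz. The beat is the residual between that
    flicker and the nearest integer multiple of the frame rate (simplified
    model for illustration).
    """
    light_hz = 2.0 * mains_hz
    nearest_multiple = round(light_hz / frame_rate_hz) * frame_rate_hz
    return abs(light_hz - nearest_multiple)

# 50 Hz mains (100 Hz flicker) with a 30 frame/s sensor beats at 10 Hz,
# whereas 60 Hz mains (120 Hz flicker) is an exact multiple of 30 frames/s
# and produces no beat.
beat_50hz_mains = flicker_beat_hz(50.0, 30.0)
beat_60hz_mains = flicker_beat_hz(60.0, 30.0)
```

This is why a fixed frame scanning frequency tied to the integration period can produce visible flicker under some lighting but not others.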