Image sensing devices can be realized with semiconductors based on their capability to convert locally impinging light energy into a proportional amount of electronic charge. More specifically, if a picture element is exposed during a time T to the local light power P_L, a charge signal Q is created according to the equation

Q = P_L · γ · T   (1)

where γ denotes the conversion efficiency of photons into electronic charge, which strongly depends on the wavelength spectrum of the incoming light. This charge Q, often called photocharge, can be stored in a two-dimensional array of charge storage devices, such as reverse-biased diodes (as in photodiode-based image sensors) or pre-charged metal-oxide-semiconductor capacitances (as in charge-coupled devices, CCDs).
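As a minimal illustration of Equation (1), the following sketch computes the photocharge for hypothetical parameter values; all names and numbers are illustrative and not taken from the patent.

```python
def photocharge(p_l, gamma, t):
    """Photocharge Q collected by one pixel, per Equation (1): Q = P_L * gamma * T.

    p_l   -- local light power impinging on the pixel, in watts (assumed value)
    gamma -- conversion efficiency of light into electronic charge,
             in electrons per joule (wavelength dependent in a real sensor)
    t     -- exposure time in seconds
    Returns Q in electrons.
    """
    return p_l * gamma * t

# Doubling the exposure time doubles the collected charge, which is the
# dependence the multi-exposure methods below exploit.
q_short = photocharge(p_l=1e-12, gamma=2.5e18, t=0.01)
q_long = photocharge(p_l=1e-12, gamma=2.5e18, t=0.02)
```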
Because the storage capacity of these devices is limited, the ratio of the maximum storable photocharge to the photocharge detection noise, called the dynamic range, is also limited. In typical CCD or photodiode image sensors, the available dynamic range is of the order of 1'000:1. Unfortunately, natural scenes (where lighting conditions cannot be controlled) and indoor scenes with highly varying illumination have dynamic ranges between 100'000:1 and 1'000'000:1.
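A back-of-envelope way to compare these figures is to count the bits needed to represent each intensity ratio; the function below is only a sketch based on the ratios quoted above.

```python
import math

def dynamic_range_bits(ratio):
    """Bits needed to represent a given intensity ratio without clipping."""
    return math.ceil(math.log2(ratio))

sensor_bits = dynamic_range_bits(1_000)      # typical sensor: about 10 bits
scene_bits = dynamic_range_bits(1_000_000)   # demanding scene: about 20 bits
```

The roughly 10-bit gap between sensor and scene is what the exposure-time-based methods discussed next attempt to bridge.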
The dynamic range of an image sensor can be increased by making use of the exposure-time dependence shown in Equation (1). The patents U.S. Pat. No. 5,144,442, U.S. Pat. No. 5,168,532, U.S. Pat. No. 5,309,243 and U.S. Pat. No. 5,517,242 describe methods based on the acquisition of two or more images, each with its individual exposure time. With these methods, at least one complete image has to be stored, preferably in digital form by making use of a frame-store. This results in a complex and cost-intensive system. Moreover, the two or more images with different exposure times cannot be taken concurrently, so a moving scene is captured not simultaneously at different exposure levels but at different points in time. Consequently, such methods exhibit undesirable temporal aliasing.
This problem can be overcome by the methods described in U.S. Pat. No. 5,483,365 and U.S. Pat. No. 5,789,737. The approach taken in U.S. Pat. No. 5,483,365 consists in exposing alternate image sensor rows for different times. U.S. Pat. No. 5,789,737 teaches the use of several picture elements (pixels), each with its own sensitivity. In both cases, the brightness information may be acquired concurrently in time but not at the identical geometrical pixel location. This implies spatial undersampling and aliasing, which is particularly undesirable in the case of so-called highlights, i.e., localized very bright pixel values usually caused by specular reflections at objects in the scene.
Once a plurality of images have been taken at different exposure times, they have to be fused, or merged, to form a single pixel value of wide dynamic range. Patents U.S. Pat. No. 4,647,975, U.S. Pat. No. 5,168,532 and U.S. Pat. No. 5,671,013 teach that the information is copied from the most suitable of the images according to some selection rule. This value is then multiplied by a suitable factor that corrects for the respective exposure time. This method works well only for ideal image sensors with completely linear pixel behavior, irrespective of illumination level and exposure time. In practical image sensors this is not the case, and the resulting response curve (output value vs. illumination level) shows discontinuities. These are particularly disturbing if the resulting images are processed further, leading to false contours and erroneous contrast values. An improvement is taught by U.S. Pat. No. 5,517,242, which claims an algorithm in which the output value at each pixel site is calculated, in a certain brightness range, as a linear combination of the values at two different exposure times, corrected by an appropriate factor that compensates for the different exposure times. In all these methods, either complete images have to be stored, or complex and area-intensive electronic circuitry is required in each pixel.
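The two fusion strategies described above can be sketched for the two-exposure case as follows. This is a hypothetical illustration, not the claimed algorithms: the thresholds v_sat and v_lo, the normalization, and the blending weight are all assumptions introduced here for clarity.

```python
def merge_two_exposures(v_short, v_long, t_short, t_long,
                        v_sat=0.9, v_lo=0.7):
    """Fuse two readings of the same pixel taken at different exposure times.

    v_short, v_long -- normalized pixel values (0..1) at the short and
                       long exposures
    t_short, t_long -- the corresponding exposure times
    v_sat -- hypothetical level above which the long exposure is saturated
    v_lo  -- hypothetical start of the blending (transition) range
    Returns a brightness estimate on the scale of the long exposure.
    """
    # Correct the short-exposure value to the long-exposure scale: per
    # Equation (1), charge is proportional to exposure time at fixed power.
    v_short_scaled = v_short * (t_long / t_short)

    if v_long <= v_lo:
        return v_long            # long exposure fully trusted (dark pixel)
    if v_long >= v_sat:
        return v_short_scaled    # long exposure saturated: use short only
    # Transition range: linear combination of both corrected values, which
    # avoids the response-curve discontinuity of hard selection.
    w = (v_long - v_lo) / (v_sat - v_lo)
    return (1.0 - w) * v_long + w * v_short_scaled

# For a perfectly linear pixel, all branches agree; discontinuities appear
# only when the real sensor deviates from the v = Q model assumed here.
merged_dark = merge_two_exposures(0.05, 0.5, t_short=1.0, t_long=10.0)
merged_sat = merge_two_exposures(0.095, 0.95, t_short=1.0, t_long=10.0)
```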
It is the aim of the invention to overcome the aforementioned disadvantages of the prior-art methods and image sensors. In particular, the invention shall make possible a wide dynamic range while obviating the need to store complete images. It shall, moreover, reduce temporal and spatial aliasing to a minimum. The problem is solved by the invention as defined in the independent claims.