Solid state image sensors are well known. Virtually all solid-state image sensors have as a key element a photosensitive element, such as a photoreceptor, a photodiode, a phototransistor, a CCD gate, or the like. Typically, the signal of such a photosensitive element is a current that is proportional to the amount of electromagnetic radiation (light) falling onto the photosensitive element.
A structure with a photosensitive element included in a circuit with accompanying electronics is called a pixel. Such pixels can be arranged in an array so as to build focal plane arrays of rows and columns.
Commonly, such solid state image sensors are implemented in a CCD technology or in a CMOS or MOS technology. Solid state image sensors find widespread use in devices such as camera systems. In this application, a matrix of pixels comprising light sensitive elements constitutes an image sensor, which is mounted in the camera system. The signal of said matrix is measured and multiplexed into a so-called video signal.
Among the image sensors implemented in a CMOS or MOS technology, a distinction is made between CMOS or MOS image sensors with passive pixels and CMOS or MOS image sensors with active pixels. An active pixel is configured with means integrated in the pixel to amplify the charge that is collected on the light sensitive element. Passive pixels do not have said means and require a charge sensitive amplifier that is not integrated in the pixel. For this reason, active pixel image sensors are potentially less sensitive to noise fluctuations than passive pixel image sensors. Due to the additional electronics in the active pixel, an active pixel image sensor may be equipped to execute more sophisticated functions, which can be advantageous for the performance of the camera system. Said functions can include filtering, operation at higher speed or operation in more extreme illumination conditions.
Examples of such image sensors are disclosed in EP-A-0739039, in EP-A-0632930 and in U.S. Pat. No. 5,608,204. The imaging devices based on the pixel structures disclosed in these documents, however, are still subject to deficiencies in image quality.
A problem in these CMOS based imaging devices appears because material imperfections and technology variations cause a non-uniformity in the response of the pixels in the array. This non-uniformity manifests itself as a fixed pattern noise (FPN) or as a photoresponse non-uniformity (PRNU). Correction of the non-uniformity requires some type of calibration, e.g. by multiplying the pixel signals by, or adding/subtracting to them, a correction amount that is pixel-dependent.
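Such a pixel-dependent calibration can be sketched as follows. This is a minimal illustration, not a circuit-level description: the function name, the array sizes and the offset/gain values are assumptions chosen for the example, with the offset map playing the role of FPN and the gain map the role of PRNU.

```python
import numpy as np

def correct_non_uniformity(raw, offset, gain):
    """Hypothetical calibration: subtract the pixel-dependent offset (FPN),
    then divide by the pixel-dependent gain (PRNU)."""
    return (raw - offset) / gain

# Example: a small 2x2 "array" whose pixels all receive the same light
# level (100), but report different values due to offset and gain spread.
offset = np.array([[2.0, -1.0], [0.5, 0.0]])   # assumed FPN map
gain = np.array([[1.02, 0.98], [1.00, 1.01]])  # assumed PRNU map
raw = 100.0 * gain + offset                    # what the sensor would read
corrected = correct_non_uniformity(raw, offset, gain)  # uniform again
```

After the correction all four pixels report the same value, provided the stored offset and gain maps match the actual pixel behaviour.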
Several methods to cancel FPN are based on techniques that are often called correlated double sampling or offset compensation. These methods are in general based on the following: the signal of the pixel is subtracted from the signal of the same pixel in a reference state (this reference state is typically the reset or dark state). The difference of both signals is free of pixel-related non-uniformity. However, if the differencing circuit is common for a part of the imager (typically, common for one column), a new non-uniformity will originate due to the non-uniformity of the differencing circuits. In a typical APS imager with common column buffers or column amplifiers, the new fixed pattern noise is column dependent, and is visible in the image as a shade of vertical stripes.
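The two effects described above can be sketched numerically. The array size, the statistics of the pixel offsets and the per-column offsets below are assumptions for illustration only; the point is that the subtraction cancels the pixel-related offset exactly, while an offset introduced per column after the differencing survives as vertical stripes.

```python
import numpy as np

# Assumed per-pixel offsets (FPN) on a small hypothetical 4x4 array.
rng = np.random.default_rng(0)
pixel_offset = rng.normal(0.0, 5.0, size=(4, 4))
scene = np.full((4, 4), 50.0)          # uniform illumination

signal_sample = scene + pixel_offset   # sample in the illuminated state
reset_sample = pixel_offset            # sample in the reference (reset) state

cds = signal_sample - reset_sample     # pixel-related offset cancels exactly

# Assumed offsets of the column differencing circuits: added after the
# subtraction, they appear identically in every row of a column.
column_offset = np.array([0.5, -0.3, 0.2, 0.0])
cds_with_column_fpn = cds + column_offset   # column-dependent stripes
```

Every row of `cds_with_column_fpn` is identical, which is exactly the stripe-shaped pattern described above.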
A stripe-shaped FPN is much more annoying than a pure statistical FPN. It is seen in experiments that a true random FPN of 5% RMS is barely visible to the human eye, whereas a stripe-shaped FPN remains visible even when the amplitude is below 1% RMS. The reason is that the human eye has a kind of built-in spatial filter that recognises larger structures even when they have low contrast.
Even in the case that there is no fixed pattern noise, the photoresponse non-uniformity can still be different from zero.
Another problem arises due to processing imperfections, statistics, etc. This means that typically, a finite number of pixels in a pixel array will be defective (hard faults) or yield a signal that deviates visibly from the “exact” pixel value. Such faults appear as white or black (or grey) points in the image. This class of faults in the sequel is called an isolated pixel value.
A known way to cancel these spots is to store a list of them and of their positions in the image in a memory unit in the device. In an image processing step, the isolated pixel value is then replaced by e.g. the average of the surrounding pixels. This method is viable, but has the disadvantage that it requires a memory. Moreover, it cannot handle isolated pixel values that appear intermittently or only in certain cases. A good example is a so-called dark current pixel. Such a pixel will appear white in a dark environment, but will behave normally in a bright environment.
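The defect-list correction can be sketched as follows. The function name and the list format are assumptions; the essential points are that the defect positions must be known in advance (the "memory" of the text) and that each listed pixel is overwritten by the average of its in-bounds neighbours.

```python
import numpy as np

def replace_defects(image, defect_list):
    """Replace each (row, col) in the stored defect list by the average
    of its valid 8-connected neighbours. Illustrative sketch only."""
    out = image.astype(float).copy()
    h, w = image.shape
    for (r, c) in defect_list:
        neighbours = [out[rr, cc]
                      for rr in (r - 1, r, r + 1)
                      for cc in (c - 1, c, c + 1)
                      if (rr, cc) != (r, c) and 0 <= rr < h and 0 <= cc < w]
        out[r, c] = sum(neighbours) / len(neighbours)
    return out

image = np.full((3, 3), 10.0)
image[1, 1] = 255.0                       # a "white" defective pixel
fixed = replace_defects(image, [(1, 1)])  # stored list names the defect
```

Note that a pixel not present in the stored list, such as an intermittent dark current pixel, is left untouched by this scheme.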
Other ways to cancel isolated pixel faults have been proposed; e.g. a spatial median filter or similar non-linear rank filters can be used to remove such isolated faults. Unfortunately, such filters also remove useful detail from the image. Consider imaging a star covered sky with an image sensor that has some faulty pixels that appear white. The quoted filters are not able to remove the white points due to faults while leaving the white points that are stars untouched.
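The drawback can be demonstrated with a minimal 3x3 median filter. The image content below (one "star" and one "defect") is an assumption for the example; the filter has no way to tell them apart, so both isolated bright points vanish.

```python
import numpy as np

def median3x3(image):
    """Sketch of a 3x3 spatial median filter (borders left unchanged)."""
    h, w = image.shape
    out = image.copy()
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            out[r, c] = np.median(image[r - 1:r + 2, c - 1:c + 2])
    return out

sky = np.zeros((5, 5))
sky[1, 1] = 200.0   # a genuine star (useful detail)
sky[3, 3] = 255.0   # a defective "white" pixel
filtered = median3x3(sky)   # both isolated points are removed alike
```

In each 3x3 neighbourhood the single bright value is outvoted by eight dark ones, so the median suppresses the star just as readily as the fault.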