Solid-state image sensors have found widespread use in camera systems. The solid-state image sensors in some camera systems are composed of a matrix of photosensitive elements in series with switching and amplifying elements. The photosensitive elements may be, for example, photoreceptors, photodiodes, phototransistors, CCD gates, or the like. Each photosensitive element receives an image of a portion of a scene being imaged. A photosensitive element together with its accompanying electronics is called a picture element, or pixel. The photosensitive elements produce an electrical signal indicative of the light intensity of the image. The electrical signal of a photosensitive element is typically a current that is proportional to the amount of electromagnetic radiation (light) falling onto that photosensitive element.
Among image sensors implemented in CMOS or MOS technology, two types are distinguished: image sensors with passive pixels and image sensors with active pixels. The difference between these two pixel structures is that an active pixel amplifies the charge that is collected on its photosensitive element, whereas a passive pixel performs no signal amplification and requires a charge-sensitive amplifier that is not integrated in the pixel.
One of the more important specifications of an image sensor is its cosmetic quality. Ideally, a sensor's image should be flawless. Unfortunately, image sensor technology is not perfect. Due to processing imperfections, statistical variation, etc., a finite number of pixels in a sensor array will be defective or will yield a signal that deviates visibly from the correct pixel value. Such faults appear as white, black, or gray points in the image. This type of pixel fault is referred to as an isolated defect pixel. For a human observer, such defects tend to be much more annoying than other image imperfections such as temporal noise, mild fixed-pattern noise, or imperfect registration of color or gray values.
One method to cancel these spots is to store a list of the defective pixels and their positions in the image in a memory of the image sensor. In an image processing step, each listed pixel value is then replaced by, for example, the average of the surrounding pixels. This method is viable, but it has the disadvantage of requiring a memory in the image sensor, which takes additional silicon area and adds expense. Moreover, it cannot handle defective pixel values that appear intermittently or only under certain conditions. A good example is a so-called dark-current pixel: such a pixel appears defective when the sensor is at elevated temperatures, yet behaves normally at lower temperatures.
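The defect-list approach above can be sketched as follows. This is a minimal illustration, assuming a grayscale image held as a two-dimensional list of integers and a precomputed list of (row, column) defect coordinates; the function name and the choice of the 8-connected neighborhood are assumptions for illustration, not specified by the source.

```python
def correct_defects(image, defect_list):
    """Replace each listed defective pixel by the average of its
    valid 8-connected neighbors (skipping other listed defects).
    Illustrative sketch of a stored-defect-list correction step."""
    h, w = len(image), len(image[0])
    defects = set(defect_list)
    corrected = [row[:] for row in image]  # work on a copy
    for (r, c) in defect_list:
        neighbors = []
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if ((dr, dc) != (0, 0) and 0 <= rr < h and 0 <= cc < w
                        and (rr, cc) not in defects):
                    neighbors.append(image[rr][cc])
        if neighbors:
            corrected[r][c] = sum(neighbors) // len(neighbors)
    return corrected
```

Note that this sketch only corrects pixels that appear in the stored list, which is exactly why intermittent defects such as dark-current pixels escape it.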
Other methods to cancel isolated pixel faults have been proposed; e.g., the spatial median filter or other types of rank filters can be used to remove such isolated faults. Unfortunately, such filters also remove useful detail from the image. Consider imaging a star-covered sky with an image sensor that has some faulty pixels that appear white. The above-noted filters are not able to remove the white points caused by faults while leaving the white points that are stars untouched.
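The limitation of the median filter can be seen in a short sketch. The 3x3 window size and border handling below are illustrative assumptions; the point is that the filter suppresses any isolated bright point, whether it is a defect or a real star.

```python
def median_filter3(image):
    """3x3 spatial median filter on a 2-D list of ints
    (border pixels are left unchanged for simplicity)."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            window = [image[r + dr][c + dc]
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
            window.sort()
            out[r][c] = window[4]  # median of the 9 samples
    return out
```

Any single bright pixel surrounded by a dark background is replaced by the background value; the filter cannot tell a white defect pixel from a one-pixel star.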
Another conventional way to correct for isolated defect pixels in black and white sensors is described in B. Dierickx, G. Meynants, "Missing pixel correction algorithm for image sensors," AFPAEC Euroopto/SPIE, Zurich, 18-21 May 1998; Proc. SPIE vol. 3410, pp. 200-203, 1998, and WO 99/16238. The missing pixel correction algorithm described therein is, in essence, a small-kernel non-linear filter that is based on predicting the allowed range of gray values for a pixel from the gray values of that pixel's neighborhood. One difficulty with such an algorithm is that it may not be suitable for use with mosaic color image sensors, because the algorithm may not be able to distinguish between defect pixels and pixels with a deviating response due to the color of the scene.
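A filter in the spirit of the cited approach can be sketched as follows. The exact range construction and the `tolerance` parameter are assumptions for illustration, not the published algorithm: the sketch predicts an allowed range from the 8-neighborhood and clips outliers into it.

```python
def missing_pixel_filter(image, tolerance=10):
    """Small-kernel non-linear filter: predict an allowed range of
    gray values for each pixel from its 8 neighbors and clip values
    that fall outside it (border pixels left unchanged). The range
    [min(neighbors) - tolerance, max(neighbors) + tolerance] is an
    illustrative assumption."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            neigh = [image[r + dr][c + dc]
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0)]
            lo = min(neigh) - tolerance
            hi = max(neigh) + tolerance
            out[r][c] = min(max(image[r][c], lo), hi)
    return out
```

On a mosaic color sensor the neighbors of a pixel carry different color filters, so a legitimately strong single-color response can fall outside the neighborhood-predicted range, which is the difficulty noted above.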
In a "raw" color image sensor's image, each pixel yields only one color component (red, green, or blue). The process of generating all color components for each pixel is a reconstruction process called demosaicing. In a demosaicing process, the missing color components of each pixel are interpolated from neighboring pixels to obtain a complete color image. One problem with conventional demosaicing processes is that the information of a defective pixel spreads and becomes false color information in the neighboring pixels.
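The spreading of a defect during demosaicing can be shown with a single interpolation step. The bilinear green estimate below is one common choice, used here only as an illustration; the function name and mosaic layout are assumptions.

```python
def interpolate_green(bayer, r, c):
    """Bilinear estimate of the green component at a red or blue site
    of a Bayer mosaic: the average of the four adjacent green samples.
    A defective neighbor directly biases this estimate, which is how
    one defect spreads false color into reconstructed pixels."""
    return (bayer[r - 1][c] + bayer[r + 1][c]
            + bayer[r][c - 1] + bayer[r][c + 1]) // 4
```

If one of the four green neighbors is a stuck-white defect, every interpolated value that uses it is pulled toward white, so the single defect appears as a colored blotch several pixels wide in the demosaiced image.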