1. Field of the Invention
This invention relates to image processing, and more particularly, to automatic bad pixel correction in image sensors.
2. Background
Digital cameras are commonly used today. Typically, a digital camera uses a sensor that converts light into electrical charges. An image sensor consists of an array of detectors, each of which converts light into an electronic signal. Image sensors also include the means to scan the two-dimensional data to an output port of the digital camera.
Two kinds of image sensors are currently used in digital cameras, based on CCD and CMOS technologies. CCD sensors combine the tasks of light conversion and array transfer in a single device using a MOS capacitor. CMOS sensors rely on diodes or transistors to convert light into electrical signals and include switches and amplifiers in each pixel to transmit data.
To represent a high-quality color image, each pixel in a pixel array must contain tri-stimulus color data (for example, red, green, and blue). Generating such data in a digital camera requires that each light detector location in an image sensor produce data for three colors. Such a system is quite complex and expensive to build. To reduce system complexity, color image sensors commonly used in digital cameras (both video and still) use a mosaic pattern of single-colored light detectors, typically red, green, and blue. Each detector location produces only one of the three required colors.
FIG. 1 depicts one such mosaic pattern 100, commonly referred to as a “Bayer pattern”. In pattern 100, each detector location collects red (R) 103, green (G) 101, or blue (B) 102 light. Green light detectors (101) are more abundant than red (103) and blue (102) detectors to match the characteristics of the human visual system. To create a full color image, the two “missing colors” at each location are inferred through an interpolation process using data from neighboring detector locations. This data is further processed through a color reconstruction algorithm to produce a presentable picture.
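The mosaic layout and the interpolation of missing colors can be sketched as follows. This is a minimal illustration, not the actual implementation: the helper names are hypothetical, one common Bayer row ordering is assumed, and the interpolation shown is a simple average of same-color neighbors.

```python
def bayer_color(row, col):
    """Color captured at (row, col) in a Bayer mosaic.
    Assumed layout: even rows alternate G, R; odd rows alternate B, G."""
    if row % 2 == 0:
        return "G" if col % 2 == 0 else "R"
    return "B" if col % 2 == 0 else "G"

def interpolate_missing(raw, row, col, color):
    """Estimate a missing `color` at (row, col) by averaging the
    in-bounds neighboring detectors that actually captured that color."""
    vals = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            r, c = row + dr, col + dc
            if (dr or dc) and 0 <= r < len(raw) and 0 <= c < len(raw[0]):
                if bayer_color(r, c) == color:
                    vals.append(raw[r][c])
    return sum(vals) / len(vals) if vals else 0
```

Production demosaicing algorithms are considerably more elaborate (edge-directed weighting, larger neighborhoods), but the principle is the same: each location's two missing colors come from nearby detectors of those colors.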
FIG. 3 shows a flow diagram of various process steps (S300 to S309, including S302A) that occur in conventional image processing systems. Based on auto-focus step S302B, an image sensor detects pixel values in step S300. In step S301, a dark current value is subtracted to reduce the noise introduced by the image sensor. In step S302, bad pixel correction is performed by an image-processing unit, and white balancing occurs in step S303. Note that bad pixel correction is applied to the Bayer image (FIG. 1) readout after dark current compensation. Also, the correction is applied before color interpolation to avoid spreading bad pixel values during the interpolation process.
In step S304, color interpolation is performed. In step S305, black level flare correction is performed, and then color correction is performed in step S306. In step S307, gamma correction is performed, and edge enhancement is performed in step S308. In step S309, color space conversion occurs, and then compression unit 206 compresses the image data.
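The ordering constraint described above can be sketched as a sequence of stages applied to the raw readout. The stage bodies here are hypothetical placeholders; the point of the sketch is the order, namely that bad pixel correction (S302) runs after dark current subtraction (S301) and before any interpolation or later stages.

```python
from functools import reduce

def subtract_dark_current(img):
    # S301: remove a sensor-dependent dark offset (value 2 is a placeholder)
    return [[max(p - 2, 0) for p in row] for row in img]

def correct_bad_pixels(img):
    # S302: placeholder; a real stage would replace defective pixel values
    return img

def white_balance(img):
    # S303: placeholder per-channel gain (a single gain of 2 for illustration)
    return [[p * 2 for p in row] for row in img]

# Stage order matters: correction must see raw, un-interpolated Bayer data.
PIPELINE = [subtract_dark_current, correct_bad_pixels, white_balance]

def process(raw):
    return reduce(lambda img, stage: stage(img), PIPELINE, raw)
```

Running bad pixel correction any later would let a single defective value leak into several output pixels during color interpolation, which is exactly what the conventional ordering avoids.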
Pixel detectors for CMOS imagers are initially biased in a reverse direction. The carriers produced by photons discharge the reverse bias. Detector voltages after exposure are therefore a measure of the intensity of light incident on a particular detector. All pixels should produce equal output when uniform light illuminates an ideal image sensor. However, in real devices some pixels produce a higher or lower output voltage than the average value produced by the image sensor. Pixels in an image created by the image sensor that appear brighter than the rest are often described as “hot” or “white” pixels. Pixels whose output is below the average value are often referred to as “cold” or “dark” pixels. Both hot and cold detectors produce blemishes in image data and are referred to as “Bad Pixels” herein.
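The hot/cold classification under uniform illumination can be sketched as a comparison of each output against the sensor-wide average. The tolerance value below is an assumption for illustration; real screening uses sensor-specific thresholds.

```python
def classify_pixels(values, tolerance=0.25):
    """Label each pixel output relative to the average under uniform light:
    'hot' if well above average, 'cold' if well below, else 'ok'.
    `tolerance` (a fractional deviation) is a hypothetical threshold."""
    avg = sum(values) / len(values)
    labels = []
    for v in values:
        if v > avg * (1 + tolerance):
            labels.append("hot")
        elif v < avg * (1 - tolerance):
            labels.append("cold")
        else:
            labels.append("ok")
    return labels
```

Both categories, hot and cold, are treated identically downstream: their values are considered unreliable and must be replaced by estimates.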
Bad pixels can occur in image sensors due to a number of reasons including, but not limited to, point defects in material used to fabricate the pixel array, higher leakage current at a certain pixel, and defects in the readout circuitry.
Conventional techniques for correcting bad pixels rely on developing and storing a bad pixel map for each sensor. Such data maps are collected during production of digital cameras, for each sensor. For example, a bad pixel map can be obtained by spotting the pixels that shine when no light is incident and the pixels that do not saturate even when bright light is shined on them for an extended period of time. The combined map is then stored in a non-volatile memory. Data processing elements in a digital camera with this type of circuitry replace bad pixel data with estimates of what it should be. However, one limitation is that the number of locations that can be stored in the memory device limits the number of pixels that can be corrected in this type of system. In addition, this is an expensive and time-consuming process during production and hence commercially undesirable.
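The map-based replacement described above can be sketched as follows. The map format (a list of defective coordinates) and the estimator (mean of in-bounds horizontal and vertical neighbors) are assumptions for illustration; on a Bayer readout a real implementation would average same-color neighbors instead.

```python
def correct_with_map(img, bad_pixel_map):
    """Replace each location listed in the factory bad-pixel map with
    the mean of its in-bounds horizontal/vertical neighbors."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for (r, c) in bad_pixel_map:
        neighbors = [img[r + dr][c + dc]
                     for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                     if 0 <= r + dr < h and 0 <= c + dc < w]
        out[r][c] = sum(neighbors) // len(neighbors)
    return out
```

The limitation noted above is visible in this sketch: only coordinates actually present in the stored map are corrected, so the capacity of the non-volatile memory caps the number of correctable pixels, and any defect arising after production is missed entirely.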
Therefore, there is a need for a system and method for efficiently performing bad pixel correction.