Digital cameras and other image capture devices use image sensors that comprise a plurality of sensor elements, commonly known as pixels. Each pixel collects light information from the viewed scene that is to be captured. In cases in which the device is configured to capture color images, each pixel collects light information as to a particular color (e.g., red, green, or blue) from the light that is transmitted to the sensor from the device lens system.
If the image capture device only comprises a single image sensor, as opposed to a separate, dedicated image sensor for each captured color, the light that is transmitted to the sensor is filtered so that each individual pixel only collects information as to a single color. This filtering is typically achieved using a two-dimensional color filter array that is laid over the image sensor.
Most filter arrays comprise a mosaic of color filters that are aligned with the various pixels of the image sensor. The most common filter arrays implement what is known in the art as a Bayer pattern. When a Bayer pattern is used, filtering is provided such that every other pixel collects green light information (i.e., is a “green pixel”), while the pixels of alternating rows of the sensor collect red light information (i.e., are “red pixels”) and blue light information (i.e., are “blue pixels”), respectively, each alternating with the green pixels along its row.
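For illustration, the alternating layout described above can be expressed as a set of boolean site masks. The following sketch assumes the common RGGB variant of the Bayer pattern (red/green on even rows, green/blue on odd rows); the function name is hypothetical:

```python
import numpy as np

def bayer_masks(height, width):
    """Boolean site masks for an RGGB Bayer pattern.

    Even rows alternate red and green pixels; odd rows alternate
    green and blue pixels, so green sites occupy every other pixel
    and account for half of the sensor.
    """
    rows, cols = np.mgrid[0:height, 0:width]
    red = (rows % 2 == 0) & (cols % 2 == 0)
    blue = (rows % 2 == 1) & (cols % 2 == 1)
    green = ~(red | blue)
    return red, green, blue

red, green, blue = bayer_masks(4, 4)
# Green occupies half the sites; red and blue one quarter each.
```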
When the image data is read out from the image sensor, information for each color (e.g., red, green, and blue) that is used to generate a resultant image must be provided for each pixel position. However, in that each pixel only collects information as to one color, the color information for the colors not collected by any given pixel must be estimated so that complete color frames can be obtained for each of the colors used to generate the image. Accordingly, if red, green, and blue are used to generate the image, red and blue light information must be estimated for each green pixel, blue and green light information must be estimated for each red pixel, and red and green light information must be estimated for each blue pixel.
The process of estimating color information in this manner is known as demosaicing and is typically accomplished through application of one or more demosaicing algorithms. Such demosaicing algorithms estimate the missing color information for each given pixel position by evaluating the color information collected by adjacent pixels. For instance, when estimating the red light information for a green pixel, the demosaicing algorithm evaluates red (and potentially blue and green) color information collected by neighboring pixels, and the missing color information is interpolated from those values. By way of example, demosaicing may be accomplished by evaluating information collected by pixels within a five-by-five or seven-by-seven matrix of pixels, the information from which comprises a “kernel”. Typically, the pixel under consideration is located at the center of this matrix so that information collected by pixels in every direction is obtained. Through this process, the missing color information can be estimated and complete color frames obtained.
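As a concrete sketch of the kernel-based interpolation described above, the following bilinear demosaic averages the same-color samples within a three-by-three neighborhood centered on each pixel (a simplification of the five-by-five and seven-by-seven kernels mentioned; the RGGB layout and function name are assumptions for illustration):

```python
import numpy as np

def demosaic_bilinear(raw):
    """Minimal bilinear demosaic sketch for an RGGB Bayer mosaic.

    Each color value at each pixel is estimated as the mean of the
    same-color samples inside the 3x3 neighborhood centered on that
    pixel, so the missing colors are interpolated from neighbors.
    """
    h, w = raw.shape
    rows, cols = np.mgrid[0:h, 0:w]
    masks = {
        "red": (rows % 2 == 0) & (cols % 2 == 0),
        "green": (rows % 2) != (cols % 2),
        "blue": (rows % 2 == 1) & (cols % 2 == 1),
    }
    padded = np.pad(raw, 1, mode="reflect")
    out = np.zeros((h, w, 3))
    for ch, mask in enumerate(masks.values()):
        pm = np.pad(mask, 1, mode="constant")  # no samples beyond edges
        vals = np.zeros((h, w))
        cnt = np.zeros((h, w))
        # Accumulate same-color samples from the 3x3 neighborhood.
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                sub = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
                m = pm[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
                vals += sub * m
                cnt += m
        out[..., ch] = vals / np.maximum(cnt, 1)
    return out
```

A flat gray input should reproduce the same flat value in all three output channels, which is a quick sanity check that the neighborhood averaging is consistent.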
Such demosaicing algorithms are applied under the assumption that the lens system that transmits light to the image sensor is ideal. In reality, however, lens systems introduce error caused by lens aberrations. Such aberrations may comprise, for example, spherical, geometric, astigmatic, radial, axial, and chromatic aberrations. Although lens designers strive to compensate for, and therefore nullify the effects of, such aberrations, not all of the aberrations can be completely corrected at the same time. In particular, reducing aberrations inherently increases the complexity of the lens design, which in turn increases the cost and size of the imaging system. Therefore, some degree of aberration is nearly always present.
Because demosaicing algorithms are not designed to account for such aberrations, less than ideal images can result. One example is the effect of lateral chromatic aberration. The term “lateral chromatic aberration” describes the phenomenon in which different colors are magnified to different degrees by the lens system. This causes the various color components (e.g., red, blue, and green) to be shifted in relation to each other to a degree that increases as a function of distance away from the center of the lens, and therefore away from the center of the image.
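The radial growth of this shift can be illustrated with a simple first-order magnification model; the optical-center coordinates and per-channel magnification factor below are hypothetical values chosen purely for illustration:

```python
import math

def ca_displacement(x, y, cx, cy, mag):
    """Displacement of an image point (x, y) when its color channel is
    magnified by `mag` about the optical center (cx, cy).

    With lateral chromatic aberration, `mag` differs slightly per
    channel, so the channels separate from one another by an amount
    that grows linearly with distance from the image center.
    """
    dx = (x - cx) * (mag - 1.0)
    dy = (y - cy) * (mag - 1.0)
    return math.hypot(dx, dy)

center = (960.0, 540.0)  # hypothetical optical center
red_mag = 1.002          # illustrative magnification, not measured
# No shift at the optical center itself...
zero_shift = ca_displacement(960.0, 540.0, *center, red_mag)
# ...and twice the shift at twice the distance from it.
near = ca_displacement(1060.0, 540.0, *center, red_mag)
far = ca_displacement(1160.0, 540.0, *center, red_mag)
```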
An example of such color shifting is illustrated in FIG. 1, which shows an image 100 that contains an image of an object in the form of a white ellipse 102. As indicated in FIG. 1, color fringes, in this case a blue fringe 104 and a red fringe 106 (color not indicated in FIG. 1), are generated that outline the ellipse 102 as a result of the red, green, and blue light information used to create the image of the ellipse being magnified to different extents such that the colors do not precisely overlap each other. This shifting results in perceived color fringing and blurring of the captured image.