The present invention relates to digital cameras and similar devices, and more particularly, to an improved method for converting data from a camera sensor to a color image.
A digital color image usually consists of an array of pixel values representing the intensity of the image at each point on a regular grid. Typically, three colors are used to generate the image. At each point on the grid the intensity of each of these colors is specified, thereby specifying both the intensity and color of the image at that grid point.
Conventional color photography records the relevant image data by utilizing three overlapping color sensing layers having sensitivities in different regions of the spectrum (usually red, green, and blue). Digital cameras and scanners, in contrast, typically utilize one array of sensors in a single "layer".
When only one sensor array is used to detect color images, only one color may be detected at any given sensor location. As a result, these sensors do not produce a color image in the traditional sense, but rather a collection of individual color samples, which depend upon the assignment of color filters to individual sensors. This assignment is referred to as the color filter array (CFA) or the color mosaic pattern. To produce a true color image, with a full set of color samples (usually red, green and blue) at each sampling location, a substantial amount of computation is required to estimate the missing information, since only a single color was originally sensed at each location in the array. This operation is typically referred to as "demosaicing".
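By way of illustration, the single-sensor sampling described above can be sketched as follows. The sketch assumes an RGGB Bayer pattern, one common CFA layout; the invention is not limited to any particular mosaic pattern.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Simulate single-sensor capture: keep one color sample per pixel
    according to an assumed RGGB Bayer color filter array."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red at even rows, even cols
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green at even rows, odd cols
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green at odd rows, even cols
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue at odd rows, odd cols
    return mosaic

# A full-color image yields a scalar sample array with 2/3 of the
# color information discarded, as a real single-sensor camera would.
rgb = np.random.rand(4, 4, 3)
m = bayer_mosaic(rgb)
```

Demosaicing is the inverse problem: recovering the discarded two-thirds of the samples from the array `m`.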
To generate the missing information, information from neighboring pixels in the image sensor must be used. In addition, some assumption must be utilized about the structure of the underlying image, since there are an infinite number of possible images that could have generated the measured color values. Typically, it is assumed that the underlying image is smooth, and an interpolation algorithm is then utilized to compute the missing color values from the neighboring measured color values.
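A minimal sketch of such smoothness-based interpolation, for the green channel only, averages the measured green neighbors of each pixel at which green was not sensed. The neighbor-averaging rule here is illustrative; any interpolation kernel consistent with the smoothness assumption could be substituted.

```python
import numpy as np

def interpolate_green(mosaic, green_mask):
    """Estimate missing green values as the mean of the measured green
    neighbors (up/down/left/right) -- the smoothness assumption."""
    g = np.where(green_mask, mosaic, 0.0)
    w = green_mask.astype(float)
    pad_g = np.pad(g, 1)
    pad_w = np.pad(w, 1)
    # Sum of the four axial neighbors, and the count of those that
    # actually carry a green measurement.
    num = (pad_g[:-2, 1:-1] + pad_g[2:, 1:-1] +
           pad_g[1:-1, :-2] + pad_g[1:-1, 2:])
    den = (pad_w[:-2, 1:-1] + pad_w[2:, 1:-1] +
           pad_w[1:-1, :-2] + pad_w[1:-1, 2:])
    est = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
    # Keep measured values; fill in estimates elsewhere.
    return np.where(green_mask, mosaic, est)

# On a perfectly smooth (constant) image the estimate is exact.
mosaic = np.full((4, 4), 5.0)
mask = (np.indices((4, 4)).sum(axis=0) % 2 == 0)  # checkerboard of greens
filled = interpolate_green(mosaic, mask)
```

On a constant image this reproduces the true values exactly; along edges and in textures, where smoothness fails, it is precisely this kind of estimate that loses resolution.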
While most images of interest to human observers are mainly smooth, the smoothness assumption is violated along the edges of objects and in textured regions of the image. In these regions, images generated by interpolation under the smoothness assumption show a loss of resolution. In addition, algorithms that treat the red, green, and blue sensor values as independent channels typically generate color artifacts in the reconstructed image. These artifacts reside in the chrominance part of the image and result from mis-registration of the chrominance components; they appear as streaks of false color in the restored image, and are especially apparent around boundaries between objects and in textured areas.
Broadly, it is the object of the present invention to provide an improved image processing method for converting data from a pixel array having non-overlapping sensors to a fully sampled digital image.
It is a further object of the present invention to provide a conversion method that does not generate the color artifacts discussed above.
It is yet another object of the present invention to provide a conversion method which has improved resolution around boundaries between objects and in textured areas.
These and other objects of the present invention will become apparent to those skilled in the art from the following detailed description of the invention and the accompanying drawings.
According to one aspect of the present invention, a method of demosaicing a digital image includes producing a full color image from the digital image; transforming the full color image to a luminance-chrominance color space; smoothing the chrominance components of the transformed image; converting the luminance and chrominance components back to the original color space of the image; resetting certain pixels in the full color image to their measured values; converting the full image back to the luminance-chrominance color space; and further smoothing the chrominance components of the full color image.
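One pass of this sequence can be sketched as follows. The luminance-chrominance transform and the 3x3 box smoother used here are illustrative stand-ins, not the particular transform or filter of the invention; the transform (Y = (R+2G+B)/4, C1 = R-G, C2 = B-G) is chosen only because it is exactly invertible.

```python
import numpy as np

def smooth(ch):
    # 3x3 box blur as a stand-in for the chrominance smoothing filter
    p = np.pad(ch, 1, mode='edge')
    return sum(p[i:i + ch.shape[0], j:j + ch.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def demosaic_pass(rgb, measured_mask, measured_vals):
    """One pass of the claimed sequence: transform to a
    luminance-chrominance space, smooth the chrominance components,
    transform back, and reset the measured samples."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Hypothetical invertible luminance-chrominance transform.
    y, c1, c2 = (r + 2 * g + b) / 4, r - g, b - g
    c1, c2 = smooth(c1), smooth(c2)
    # Exact inverse of the transform above.
    g2 = y - (c1 + c2) / 4
    out = np.stack([c1 + g2, g2, c2 + g2], axis=-1)
    # Reset pixels at which a value was actually sensed.
    out[measured_mask] = measured_vals[measured_mask]
    # The second conversion and chrominance smoothing of the claim
    # would repeat the same pattern on `out`.
    return out

# Sanity check: a constant image is a fixed point of the pass.
rgb = np.full((4, 4, 3), 0.5)
mask = np.zeros((4, 4, 3), dtype=bool)
result = demosaic_pass(rgb, mask, rgb)
```

Smoothing only the chrominance planes attacks the mis-registration artifacts directly while leaving the luminance detail untouched.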
According to another aspect of the present invention, a method for interpolating an image includes assigning a value to each unknown pixel of the image; and filtering a luminance component of said image by decomposing said luminance component into a plurality of component images, each component image representing information in said image at a different level of scale; applying a low-pass spatial filter to each of said component images, said low-pass filter having an anisotropy that varies with location in said component image; and combining said filtered component images to generate a new luminance component.
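A minimal sketch of this aspect follows, assuming a two-component decomposition (coarse plus detail) and a simple edge-adaptive filter; both choices are illustrative, not the decomposition or filter of the invention.

```python
import numpy as np

def anisotropic_smooth(img):
    """Edge-adaptive low-pass filter: at each pixel, average along the
    direction of smaller local variation, so the filter's anisotropy
    varies with position in the image."""
    p = np.pad(img, 1, mode='edge')
    horiz = (p[1:-1, :-2] + p[1:-1, 2:]) / 2   # horizontal average
    vert = (p[:-2, 1:-1] + p[2:, 1:-1]) / 2    # vertical average
    dh = np.abs(p[1:-1, :-2] - p[1:-1, 2:])    # horizontal variation
    dv = np.abs(p[:-2, 1:-1] - p[2:, 1:-1])    # vertical variation
    return np.where(dh <= dv, horiz, vert)

def filter_luminance(y):
    """Decompose luminance into coarse and detail component images,
    low-pass each with the position-dependent filter, and recombine."""
    coarse = anisotropic_smooth(y)
    detail = y - coarse
    return anisotropic_smooth(coarse) + anisotropic_smooth(detail)

# A constant luminance plane passes through unchanged.
y = np.full((5, 5), 2.0)
out = filter_luminance(y)
```

Averaging along, rather than across, the direction of least variation is what preserves resolution at object boundaries while still suppressing noise.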
According to yet another aspect of the present invention, a method for interpolating an image includes (a) filling in missing pixel values; (b) transforming the image to luminance-chrominance space; (c) filtering luminance values of said transformed image with a low-pass filter having an anisotropy that varies with position in said image; (d) filtering chrominance values of said transformed image with an isotropic filter; (e) transforming the transformed image back to its original color space; (f) resetting each measured pixel to its measured value; and (g) repeating steps (b)-(f).
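The iteration of steps (b)-(f) can be sketched as the loop below. The invertible color transform and the isotropic box filter are illustrative assumptions, and the luminance filtering of step (c) is left as the identity for brevity; the anisotropic filter of the invention would be substituted there.

```python
import numpy as np

def iterative_demosaic(init_rgb, mask, measured, iters=3):
    """Repeat steps (b)-(f): forward color transform, chrominance
    smoothing, inverse transform, and resetting of measured samples."""
    def box(ch):
        # Isotropic 3x3 box filter for step (d).
        p = np.pad(ch, 1, mode='edge')
        return sum(p[i:i + ch.shape[0], j:j + ch.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    rgb = init_rgb.copy()
    for _ in range(iters):                              # step (g)
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y, c1, c2 = (r + 2 * g + b) / 4, r - g, b - g   # step (b)
        # step (c): anisotropic luminance filtering; identity here
        c1, c2 = box(c1), box(c2)                       # step (d)
        g2 = y - (c1 + c2) / 4                          # step (e)
        rgb = np.stack([c1 + g2, g2, c2 + g2], axis=-1)
        rgb[mask] = measured[mask]                      # step (f)
    return rgb

# A constant image is a fixed point of the full iteration.
init = np.full((4, 4, 3), 0.25)
mask = np.zeros((4, 4, 3), dtype=bool)
out = iterative_demosaic(init, mask, init)
```

Because step (f) re-imposes the sensed data on every pass, the iteration can only refine the estimated samples; it never drifts away from what the sensor actually measured.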