Electronic devices with integrated image systems for capturing still and/or motion images (video), such as mobile phones, smart phones, wearable gadgets, tablet computers, portable computers, hand-held computers, digital cameras, digital camcorders, navigators, interactive toys, game consoles, remote conference terminals, surveillance systems, etc., have become prevalent, popular and essential in the contemporary information society. To meet demands for light weight and compactness, CMOS sensors and tiny optical (lens) modules are adopted to form embedded image systems. However, such image systems suffer from color artifacts, e.g., lens color shading (or color shading in brief).
Color shading is a well-known color artifact that exists in digital image systems, and is particularly prominent in images captured by CMOS sensors. When an image system captures an image of a uniformly illuminated gray wall, it is expected to obtain a uniformly gray image as well. However, the captured image will show both intensity non-uniformity and color non-uniformity from the image center toward the image corners and edges. Typically, intensity non-uniformity causes a captured image to be darker at the image edges and corners than at the image center. This is owing to off-axis geometric factors in image formation, whereby light is attenuated toward the image edges.
Furthermore, color non-uniformity also exists in the captured image. The hue and the chroma of the image color gradually change outward from the image center, but do not necessarily follow a radially symmetric pattern, nor attenuate at a constant fall-off rate. The artifact of such color non-uniformity is referred to as color shading. Color shading severely degrades image color quality.
Please refer to FIG. 1 illustrating color shading of different colors. As shown in the 3D surface plot of FIG. 1, the green channel (the distribution of the green component of each pixel) of a captured image suffers a non-uniformity which falls off at the edges and corners of the image, even though the image is supposed to be uniform. FIG. 1 also includes a cross-sectional view to demonstrate the non-uniformities of the green, blue and red channels. It is noted that different color channels suffer different non-uniformities. Compared to the green and blue channels, the red channel experiences a more serious fall-off toward the edges.
The overall gradual attenuation at locations away from the image center is regarded as vignetting or luminance fall-off. Different fall-offs of different color channels result in color distortion in the image. In order to correct the overall fall-off and color shading artifacts, additional gains may be applied to the color channels by image signal processing (ISP) to compensate for the fall-off non-uniformities. Since the fall-offs can differ between any two color channels at every spatial location, the correction gains should compensate for the fall-offs of each color channel at each spatial location.
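The per-channel, per-location gain correction described above can be sketched as follows. This is a minimal illustration, not an actual ISP implementation; the function name, array shapes and value ranges are assumptions for the sake of the example:

```python
import numpy as np

def apply_shading_correction(image, gain_maps):
    """Multiply each color channel by its spatially varying correction gain.

    image: H x W x 3 float array (e.g., R, G, B channels) in [0, 1].
    gain_maps: H x W x 3 float array of per-pixel, per-channel gains,
               typically >= 1.0 and increasing toward edges/corners,
               with a distinct map for each color channel.
    (Hypothetical shapes/ranges; a real ISP pipeline may differ.)
    """
    corrected = image * gain_maps          # independent gain per channel per location
    return np.clip(corrected, 0.0, 1.0)    # keep values in the valid range
```

Because the gains are independent per channel and per pixel, the same mechanism corrects both the overall luminance fall-off and the channel-dependent color shading.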
Several factors contribute to the formation of color shading, including lens vignetting, pixel vignetting, pixel cross-talk and properties of an IR-cut filter, etc. Lens vignetting is an inherent optical property of an optical lens (or lenses). Light passing through an optical lens falls off from the center to the periphery of the lens. Theoretical analysis shows that image irradiance decreases at a rate of the cosine of the off-axis angle raised to the fourth power, though real lenses may not exactly follow this cosine-fourth law. Lens vignetting, i.e., the lens fall-off effect, is a strong contributor to intensity non-uniformity.
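The idealized cosine-fourth law can be expressed as a short sketch. The function below is illustrative only; as noted above, real lenses deviate from this model:

```python
import math

def cos4_falloff(field_angle_deg):
    """Relative irradiance predicted by the cosine-fourth law.

    field_angle_deg: off-axis angle (in degrees) of the image point.
    Returns irradiance relative to the on-axis value (1.0 at 0 degrees):
    E(theta) = E(0) * cos(theta)**4.
    """
    theta = math.radians(field_angle_deg)
    return math.cos(theta) ** 4
```

For example, at a 30-degree field angle the predicted irradiance drops to cos(30°)^4 = 9/16 ≈ 0.56 of the on-axis value, illustrating why corners appear noticeably darker than the center.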
Another factor of color shading is pixel vignetting, i.e., irradiance fall-off due to the pixel structure. Pixel vignetting can be affected by many causes, such as the pixel layout design and the structure of the micro-lens.
Pixel cross-talk also contributes to color shading. Pixel cross-talk is a phenomenon in which the signal of a pixel is erroneously picked up by its adjacent pixels, normally of a different color channel. Pixel cross-talk may include optical cross-talk and electronic cross-talk. Their end effects are similar in that signal from one color channel can leak into another color channel. The cross-talk may be mutual.
In order to block IR light, IR-cut filters are used in image systems. For some types of IR-cut filters, however, the effective cutoff wavelength is not constant with respect to the incident angle of light; there is a wavelength shift from the lens center to the lens periphery. More light may be blocked at the lens periphery than at the lens center if the wavelength of the light is near the cutoff wavelengths of the IR-cut filter. As a result, the red channel signal can be more attenuated at image corners and edges than the green and blue channel signals.
It is well observed that the characteristics of color shading are sensitive to the type of illuminant (light source), or more specifically, to the spectral composition of the incident light. That is, when capturing images under different illuminants, the resultant images will suffer different non-uniformities, even when the scenes being captured are the same. Different lens-sensor combinations may also cause color shading of different characteristics. Furthermore, due to manufacturing variations, the characteristics of color shading differ between units of the same batch, even under the same illuminant. Therefore, color shading correction becomes essential for the image system.
Conventionally, color shading correction is achieved through complex calibration. In order to calibrate the color shading of an image system, the image system is exposed to a uniformly illuminated field, and images are captured under several different kinds of illuminants, so that a corresponding number of sets of fall-off maps are obtained by directly measuring the non-uniformities of the captured images. Each set of fall-off maps corresponds to one kind of illuminant and includes fall-off maps of the different color channels. Accordingly, a shading correction map can be prepared to reverse the non-uniformity under the corresponding kind of illuminant. After calibration, the shading correction maps corresponding to the different types of illuminants are stored in the image system. When capturing an image, the conventional image system first attempts to identify which kind of ambient illuminant it is currently exposed to, and accordingly selects a corresponding shading correction map to correct the color shading of the captured image.
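The calibration step described above, deriving a correction map that reverses the measured non-uniformity, can be sketched as follows. The function name, the choice of the image center as the reference, and the input format are assumptions for illustration; actual calibration procedures vary by vendor:

```python
import numpy as np

def build_correction_maps(flat_field):
    """Derive per-channel shading correction maps from one flat-field capture.

    flat_field: H x W x 3 image of a uniformly illuminated gray target,
                captured under a single illuminant (hypothetical input).
    Returns gain maps that restore each channel to its center value:
        gain(x, y, c) = flat_field(center, c) / flat_field(x, y, c)
    so that multiplying the capture by the gains yields a uniform image.
    """
    h, w, _ = flat_field.shape
    center = flat_field[h // 2, w // 2, :]                 # reference: values at image center
    gains = center[np.newaxis, np.newaxis, :] / np.maximum(flat_field, 1e-6)
    return gains
```

One such set of gain maps would be computed and stored per calibrated illuminant; at capture time the system selects the set matching the identified illuminant.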
Typically, it is expected that, if more illuminants are calibrated to provide more shading correction maps, a scene has a better chance of being covered by one of the shading correction maps. In reality, however, color shading calibration is constrained by equipment availability, memory requirements, system resource capacity, time requirements, productivity and/or cost concerns. In practice, reducing the required repetitions of shading calibration at the production line is desired, which consequently compromises the effectiveness of color shading correction. In addition, since the characteristics of color shading are distinct for different units of the same production batch, calibration of every individual unit is required. Nevertheless, performing calibration on every unit is expensive.
Correctly identifying the type of illuminant is also a challenge, because image systems for consumer electronics lack a satisfactory ability to resolve the wavelength details of illuminants. Moreover, it is very difficult, if not impossible, to enumerate a sufficient variety of illuminants, since real-world illumination mixes light from diverse sources at different distances, different incident angles and/or different intensities, including natural radiant bodies, many kinds of man-made radiant sources, reflectors and/or scatterers.
Incorrect illuminant identification and/or the lack of a proper shading correction map will cause erroneous color shading correction. For example, the corrected image may become reddish at the image center because the red channel is overcompensated.