Imagers, such as CCD and CMOS imagers, are widely used in imaging applications, including digital still and video cameras.
It is well known that, for a given optical lens used with a digital still or video camera, the pixels of the pixel array will generally have varying signal values even if the imaged scene is uniform. The varying responsiveness depends on a pixel's spatial location within the pixel array. One source of such variation is lens shading. Roughly, lens shading causes pixels located farther away from the center of the pixel array to have lower values than pixels located closer to the center when the camera is exposed to a spatially uniform light stimulus. Other sources may also contribute to variations in a pixel value with spatial location. These variations can be compensated for by adjusting, for example, the gain applied to the pixel values based on spatial location in the pixel array. For lens shading correction, for example, the farther a pixel is from the center of the pixel array, the more gain may need to be applied to its value. Different color channels may also be affected differently by various sources of shading. In addition, an optical lens is sometimes not centered with respect to the optical center of the imager, with the effect that lens shading may not be centered at the center of the imager pixel array. Each color channel may also have a different center, i.e., a different pixel with the highest response.
Variations in the shape and orientation of the photosensors used in the pixels may also contribute to a non-uniform spatial response across the pixel array. Spatial non-uniformity may further be caused by optical crosstalk or other interactions among the pixels in a pixel array. In addition, changes in optical state, for example variations in iris opening or focus position, can alter the spatial pattern of non-uniformity, affecting pixel values differently depending on spatial location.
Variations in a pixel signal caused by the spatial position of a pixel in a pixel array can be measured and the pixel response value can be corrected with a pixel value gain adjustment. Lens shading, for example, can be corrected using a set of positional gain adjustment values, which adjust pixel values in post-image capture processing. With reference to positional gain adjustment to correct for shading variations with a fixed optical state configuration, gain adjustments across the pixel array can typically be provided as pixel signal correction values, one corresponding to each of the pixels. The set of pixel correction values for the entire pixel array forms a gain adjustment surface for each of a plurality of color channels. For color sensors, the gain adjustment surface is applied to the pixels of the corresponding color channel during post-image capture processing to correct for variations in pixel value due to the spatial location of the pixels in the pixel array. For monochrome sensors, a single gain adjustment surface is applied to all the pixels of the pixel array.
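As an illustration of how a gain adjustment surface may be applied in post-image capture processing, the following sketch multiplies each pixel of one color channel by the gain stored at the same spatial location. It is a minimal sketch in Python with NumPy; the function name, the array dimensions, and the simple radial gain model are all hypothetical and not taken from this disclosure.

```python
import numpy as np

def apply_gain_surface(raw, gain_surface):
    """Apply a per-pixel positional gain adjustment surface to one color
    channel: each pixel value is multiplied by the gain stored for its
    spatial location."""
    return raw * gain_surface

# A toy gain surface that increases with distance from the array center,
# as lens shading correction typically requires (hypothetical falloff model).
h, w = 4, 6
y, x = np.mgrid[0:h, 0:w]
r = np.hypot(y - (h - 1) / 2, x - (w - 1) / 2)
gain = 1.0 + 0.1 * r

# A spatially uniform stimulus: after correction, edge pixels receive
# more gain than center pixels, compensating for shading falloff.
flat_scene = np.full((h, w), 100.0)
corrected = apply_gain_surface(flat_scene, gain)
```

For a color sensor, one such surface would be applied per color channel; for a monochrome sensor, a single surface would be applied to the whole array.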
When a gain adjustment surface is calculated for a specific color channel/camera/lens/IR-cut filter, etc. combination, it is generally applied to all captured images from an imager having that combination. This does not present a particular problem when a camera has only a single optical state. Cameras with varying optical states, however, will generally need different lens shading and other pixel correction values for each color channel at each different optical state. These varying corrections cannot be accurately implemented using a single gain adjustment surface for each color channel. Accordingly, it would be beneficial to have a variety of gain adjustments available, for each color channel at each of the varying optical states, to correct for the different patterns of pixel value spatial variation at the different optical states.
It may be possible to address the problem of different focal lengths of a lens by storing a relatively large number of sets of gain adjustment surfaces, each set corresponding to one of the many possible optical states of a given lens, and each set containing an adjustment surface for each color channel. The storage overhead, however, would be large, and a large amount of retrieval time and power would be consumed when the zoom lens position and/or other optical state changes, for example during video image capture, as each gain adjustment surface must first be determined or retrieved and then applied to the captured image.
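To see the scale of the storage overhead, a back-of-the-envelope calculation can help. The figures below (array size, channel count, number of optical states, and gain precision) are purely illustrative assumptions, not values from this disclosure.

```python
# Hypothetical cost of storing one full-resolution gain surface per color
# channel for every optical state (all figures illustrative only).
width, height = 2592, 1944   # 5-megapixel pixel array
channels = 4                 # R, Gr, Gb, B Bayer color channels
optical_states = 20          # e.g. 20 discrete zoom positions
bytes_per_gain = 2           # 16-bit fixed-point gain value

total_bytes = width * height * channels * optical_states * bytes_per_gain
print(total_bytes / 2**20)   # roughly 769 MiB of correction data
```

Even with coarser surfaces or fewer states, full per-state storage grows multiplicatively in each factor, which motivates the interpolation approach discussed next.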
Accordingly, methods, apparatuses and systems providing spatial pixel gain adjustments (“positional gain adjustment”) and other spatially-varying adjustments for use with pixel values of images captured using cameras using multiple optical states are desirable.
One solution, as described in copending application Ser. No. 11/798,281, entitled METHODS, APPARATUSES AND SYSTEMS PROVIDING PIXEL VALUE ADJUSTMENT FOR IMAGES PRODUCED WITH VARYING FOCAL LENGTH LENSES, filed May 11, 2007 (the '281 application), the entirety of which is incorporated herein by reference, is to store a fixed number of adjustment surfaces corresponding to specific focal lengths of a zoom lens, and then interpolate or extrapolate an adjustment surface for each color channel from the stored surfaces when a focal length does not have a corresponding stored surface. However, even though the number of stored surfaces is less than the number of possible focal lengths, a relatively large amount of memory may still be required to generate surfaces that approximate the ideal correction surfaces with sufficient accuracy, which may be a problem in some designs.
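An interpolation scheme of this general kind might be sketched as follows. This is a minimal sketch assuming simple per-pixel linear interpolation between two stored surfaces; the function name and the focal-length values are illustrative and are not taken from the '281 application.

```python
import numpy as np

def interpolate_surface(f, f_lo, s_lo, f_hi, s_hi):
    """Per-pixel linear interpolation (f_lo <= f <= f_hi) or extrapolation
    (otherwise) of a gain adjustment surface for focal length f from two
    surfaces stored at focal lengths f_lo and f_hi."""
    t = (f - f_lo) / (f_hi - f_lo)
    return (1.0 - t) * s_lo + t * s_hi

# Two stored surfaces at hypothetical focal lengths of 18 mm and 55 mm;
# a surface for 35 mm is generated on the fly instead of being stored.
s18 = np.ones((4, 6))
s55 = np.full((4, 6), 1.5)
s35 = interpolate_surface(35.0, 18.0, s18, 55.0, s55)
```

Only the endpoint surfaces need to be stored per color channel; intermediate focal lengths are served by computation, trading a small amount of arithmetic for a large reduction in stored surfaces.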
Accordingly, improved methods, apparatuses and systems providing better-fitting adjustment surfaces given a certain amount of available storage capacity are desired. Variations in the required adjustments caused by, e.g., changes in focal length, iris opening and focus position can be corrected using disclosed embodiments.