1. Field of the Invention
The present invention relates generally to image mosaicing, and more particularly to methods and systems for using image mosaicing to enhance the dynamic range of images and/or to determine additional characteristics of radiation signals received from scenes.
2. Description of the Related Art
Major limitations of imaging systems (e.g., cameras) include limited field of view, limited dynamic range, limited spectral (e.g., color) resolution, and limited depth of field (i.e., a limited range of distances at which scene points remain adequately in focus on the image plane). In addition, conventional imaging systems typically measure only the intensity of incoming light as a function of the direction from which the light is received, and are unable to measure other characteristics such as depth (e.g., the distance of objects from the camera) and the polarization state of the light, which would be useful for remote recognition of materials, shapes, and illumination conditions, and for the analysis of reflections. Furthermore, the quality of the measurements made by conventional cameras tends to be relatively low. For example, in typical CCD cameras, the intensity definition has only 8 bits, and the spectral definition is very poor, consisting of only three broadband channels (typically red, green, and blue).
Even when attempts have been made to overcome the above-described limitations, the resulting system has been complex, and has addressed only a narrow problem, while ignoring the other limitations. For example, imaging spectrometers provide high resolution in the spectral dimension, but do not extend the intensity dynamic range of the sensor.
A common way to obtain images having a large field of view without compromising spatial resolution is by using “image mosaics.” This technique involves combining smaller images, each of which covers a different view of the scene, to obtain a larger image having a wider field of view. The method has been used in various scientific fields such as radio astronomy, remote sensing by synthetic aperture radar (SAR), optical observational astronomy, and remote optical sensing of the Earth and other objects in the solar system. Recently, algorithms have been developed to cope with arbitrary camera motions, and such algorithms have enabled image mosaics to be used with video cameras. In regions where the smaller, component images overlap, the raw data can be processed to enhance spatial resolution. However, conventional image mosaic techniques are unable to enhance resolution (e.g., dynamic range) with regard to the spectrum, polarization, and brightness of each pixel. Depth is recoverable from image mosaics if parallax is introduced into a sequence of images. However, parallax methods are usually less robust and more complex than methods which estimate depth using focus/defocus cues.
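The mosaicing principle above can be illustrated with a toy one-dimensional sketch (hypothetical data and a brute-force registration step; real mosaicing algorithms handle two-dimensional images and arbitrary camera motion):

```python
import numpy as np

# Toy 1-D mosaicing sketch: register two overlapping "images" by the shift
# that minimizes their disagreement, then composite them into a wider view.
def register(a, b, max_shift):
    """Find the offset of b relative to a with the smallest overlap error."""
    best, best_err = 0, np.inf
    for s in range(1, max_shift + 1):
        overlap = len(a) - s
        err = np.abs(a[s:] - b[:overlap]).mean()
        if err < best_err:
            best, best_err = s, err
    return best

def mosaic(a, b, max_shift):
    """Stitch b onto a at the registered offset."""
    s = register(a, b, max_shift)
    return np.concatenate([a[:s], b])

scene = np.arange(20.0) ** 1.5        # a wide 1-D "scene"
a, b = scene[:12], scene[8:]          # two overlapping narrower views
print(len(mosaic(a, b, max_shift=10)))  # 20: the full field of view is recovered
```

The composite covers the union of the two fields of view at full resolution, which is the essential benefit the paragraph above describes.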
Nonlinear detectors have been used to extend the optical dynamic range of images. For example, CMOS detectors have been manufactured which: (1) yield an electronic output signal which is logarithmic with respect to light intensity, or (2) combine two images having different integration times. The intensity dynamic ranges of such sensors tend to be on the order of 1:10⁶, which enables unsaturated detection of large (i.e., high irradiance) signals. However, the intensity information in such a device is compressed, because in order to sample (sparsely) the high intensity range, the detector uses quantization levels which would otherwise be dedicated to the lower intensities. Thus, the output still has only 8-12 bits of intensity resolution.
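The quantization trade-off described above can be made concrete with a short numerical sketch (hypothetical parameters: an idealized 8-bit logarithmic response covering a 1:10⁶ irradiance range):

```python
import math

LEVELS = 256          # 8-bit output
MAX_RATIO = 1e6       # dynamic range covered by the logarithmic response

def log_quantize(irradiance):
    """Map irradiance in [1, MAX_RATIO] to one of 256 logarithmic levels."""
    code = round((LEVELS - 1) * math.log(irradiance) / math.log(MAX_RATIO))
    return min(max(code, 0), LEVELS - 1)

def level_width(code):
    """Irradiance span covered by a single quantization level."""
    step = MAX_RATIO ** (1.0 / (LEVELS - 1))
    return (step ** code) * (step - 1)

# Low irradiances are sampled finely, high irradiances very sparsely:
print(level_width(0))    # narrow step near irradiance 1
print(level_width(255))  # a step about a million times wider near 10^6
```

The widening of the quantization steps at high irradiance is exactly the compression of intensity information noted above: the sensor avoids saturation, but only 256 codes span the entire range.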
Nonlinear transmittance hardware which has a lower transmittance for higher light intensities can extend the dynamic range of any given detector. However, the intensity dynamic range is still quantized according to the limited definition of the detector—i.e., the 8 bits of definition in an ordinary CCD are simply nonlinearly stretched to cover a higher irradiance range. Consequently, the nonlinear compression sacrifices resolution in the lower intensity range.
Automatic gain control (AGC) is common in video and digital cameras, and is analogous to automatic exposure in still-image cameras. However, a major drawback of AGC is that its effect is global, and as a result, the gain setting is likely to be too high for some portions of the image, yet too low for other portions. For example, a bright point is likely to be saturated if it is within a relatively dark image, and a dim point is likely to be too dark for proper detection if it is within a relatively bright image. Image mosaics can be constructed from sequences in which AGC adaptively changes the sensor gain as the scene is scanned. However, although some enhancement of dynamic range has been achieved by this technique, such methods still suffer from an inability to properly measure bright points in mostly dark images, and dark points in mostly bright images.
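The global nature of the AGC drawback can be demonstrated with a minimal sketch (hypothetical mean-based gain rule and an idealized 8-bit linear sensor):

```python
import numpy as np

def agc_gain(scene, target_mean=128.0):
    """One global gain chosen so the frame's mean lands mid-range."""
    return target_mean / scene.mean()

def capture(scene, gain, full_well=255):
    """Idealized linear 8-bit sensor: scale, clip, and quantize."""
    return np.clip(scene * gain, 0, full_well).round()

# A mostly dark scene containing a single bright point:
scene = np.full((8, 8), 2.0)
scene[4, 4] = 500.0
img = capture(scene, agc_gain(scene))
print(img[4, 4])  # 255.0: the globally chosen gain saturates the bright point
```

Because the gain is selected from a frame-wide statistic, no single setting can serve both the dark background and the bright point, which is the failure mode described above.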
Mounting spatially varying optical filters on a camera is a common practice in amateur and professional photography. However, such filters have primarily been used to alter raw images to produce special visual effects. Such filters have not been used in connection with resolution enhancement algorithms.
It has been proposed that the dynamic range of each pixel of an image can be enhanced by using a set of multiple, differently exposed images. One such method involves estimating, for each pixel, the value that best agrees with the data from the multiple samples of the pixel. Another approach is to select, for each pixel, the value that maximizes the local contrast. However, such approaches use a stationary camera to capture the sequence of images, and consequently, provide no enlargement of the field of view.
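A simple per-pixel fusion of differently exposed images can be sketched as follows (assuming a linear sensor with known exposure ratios, and using an unweighted mean of unsaturated samples as a stand-in for the estimators used in practice):

```python
import numpy as np

def fuse_exposures(images, exposures, full_well=255):
    """Estimate per-pixel scene radiance from differently exposed images.

    Each unsaturated sample is normalized by its exposure; the estimate
    is the mean of the valid (unsaturated) samples at each pixel.
    """
    images = np.asarray(images, dtype=float)
    exposures = np.asarray(exposures, dtype=float)
    valid = images < full_well                      # discard saturated samples
    normalized = images / exposures[:, None, None]  # per-sample radiance
    counts = valid.sum(axis=0)
    return np.where(counts > 0,
                    (normalized * valid).sum(axis=0) / np.maximum(counts, 1),
                    full_well / exposures.min())    # all saturated: lower bound

radiance = np.array([[12.0, 400.0]])               # true scene radiances
exposures = np.array([1.0, 0.25])
images = [np.clip(radiance * e, 0, 255).round() for e in exposures]
# Recovers both 12 and 400, although 400 saturates the longer exposure:
print(fuse_exposures(images, exposures))
```

The short exposure supplies the bright pixel that saturates in the long exposure, so the fused result exceeds the 8-bit range of any single frame; note, however, that as stated above this scheme assumes a stationary camera and does not enlarge the field of view.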
An additional approach uses a mosaic array of small filters which cover the detector array of the imager. Each filter covers a particular detector pixel. The result is a spatially inhomogeneous mask which modulates the light impinging on the detector. In order to extend the intensity dynamic range, the sensor array can be covered with a spatial mosaic array of neutral (i.e., color independent) density filters. However, such a configuration sacrifices spatial resolution in order to extend the dynamic range. Spectral information can be obtained by covering the detector with a mosaic array of color filters. However, such a configuration sacrifices spatial resolution in order to obtain some spectral resolution (i.e., color information). In addition, a detector can be covered with a mosaic of linear polarizers oriented in various different directions. However, such a configuration sacrifices spatial resolution for the polarization information.
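The spatial-resolution cost of a detector-covering filter mosaic can be illustrated with a toy example (hypothetical checkerboard of neutral density filters with 1/16 attenuation over an idealized 8-bit sensor):

```python
import numpy as np

def nd_mask(shape, attenuation=1.0 / 16.0):
    """Checkerboard mask: half the pixels see attenuated light."""
    mask = np.ones(shape)
    mask[::2, ::2] = attenuation
    mask[1::2, 1::2] = attenuation
    return mask

scene = np.full((4, 4), 1000.0)       # bright scene; saturates an 8-bit pixel
raw = np.clip(scene * nd_mask(scene.shape), 0, 255)

# Unfiltered sites saturate; only the attenuated half of the pixels carry
# usable measurements, so effective spatial resolution is halved:
print((raw == 255).mean())  # 0.5
```

The extended intensity range at the filtered sites is bought at the price of losing half of the spatial samples in bright regions, which is the trade-off the paragraph above describes for neutral density, color, and polarizer mosaics alike.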
High resolution spectral filtering has been obtained by covering a detector array with a spatially varying spectral filter—i.e., a filter having a spectral passband which changes across the vertical and/or horizontal viewing angle of the detector. In such a system, different points in the field of view are filtered differently. The spectrum at each point is obtained by scanning the camera's field of view across the scene. However, placing the filter directly on the detector array reduces the flexibility of the system by making it difficult to change the effective characteristics of the spectral filtering or to measure other properties of the light received from the scene.
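The scanning principle behind such spatially varying spectral filters can be sketched as follows (an idealized model, assuming one narrow passband per detector column and a pan of one column per frame):

```python
import numpy as np

n_points, n_bands = 5, 5
rng = np.random.default_rng(0)
spectra = rng.random((n_points, n_bands))   # ground-truth spectrum per point

# As the camera pans, each scene point is imaged through successive columns
# of the filter, so its full spectrum accumulates over the frame sequence.
measured = np.full((n_points, n_bands), np.nan)
for shift in range(n_points + n_bands - 1): # camera pan: one column per frame
    for x in range(n_bands):                # detector column = passband index
        p = shift - x                       # scene point imaged at column x
        if 0 <= p < n_points:
            measured[p, x] = spectra[p, x]

print(np.allclose(measured, spectra))  # True: every point sampled in every band
```

Each point in the field of view is eventually seen through every passband, yielding a full spectrum per point without sacrificing spatial resolution; the inflexibility noted above arises because the filter, and hence the property being scanned, is fixed to the detector.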
If the scene is scanned line by line with a linear scanner, spatial resolution is not sacrificed to obtain spectral information. For example, in trilinear scanners, each linear portion of the image is sensed consecutively with red, green, and blue filters. Pushbroom cameras, which are often used in remote sensing work, operate similarly; each scene line is diffracted by a dispersive element onto a 2D detector array, and as a result, each line is simultaneously measured in multiple spectral channels. However, such scanners and pushbroom cameras are limited to one-dimensional (1-D) scanning at a constant speed. Furthermore, an image formed by such a system is not foveated; the entire image is scanned using the same detector characteristics. Accordingly, to capture a significant field of view, numerous acquisitions must be taken, because each acquisition captures only a one-pixel-wide column.
Images have been captured with different focus settings, and then combined to generate an image with a large depth of field. An approach using a tilted sensor has succeeded in capturing all scene points in focus while extending the field of view. However, this approach does not enhance the dynamic range of the image.
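Combining differently focused images can be sketched with a minimal focus-stacking example (assuming a per-pixel absolute-Laplacian sharpness measure, one simple choice among many used in practice):

```python
import numpy as np

def laplacian_abs(img):
    """Local sharpness: absolute value of a 4-neighbor Laplacian."""
    padded = np.pad(img, 1, mode="edge")
    lap = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
           padded[1:-1, :-2] + padded[1:-1, 2:] - 4 * padded[1:-1, 1:-1])
    return np.abs(lap)

def focus_stack(images):
    """Per pixel, keep the value from the image where sharpness is highest."""
    images = np.asarray(images, dtype=float)
    sharpness = np.stack([laplacian_abs(im) for im in images])
    best = sharpness.argmax(axis=0)
    return np.take_along_axis(images, best[None], axis=0)[0]

# Toy demo: one image holds sharp detail, the other is defocused (flat).
sharp = (np.indices((4, 4)).sum(axis=0) % 2).astype(float)
defocused = np.full((4, 4), 0.5)
result = focus_stack([sharp, defocused])
print(np.array_equal(result, sharp))  # True: in-focus values win everywhere
```

Selecting the sharpest sample per pixel extends the depth of field of the composite, but, as noted above, such focus-based combination by itself does nothing for the intensity dynamic range.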
It is common practice in optics to revolve spatially varying choppers and reticles in front of, or within, an imaging system. However, such systems require the imager to have additional internal or external parts which move during image acquisition.