High dynamic range (“HDR”) images are becoming increasingly common in a number of applications. An HDR image of a scene can be generated by conventional computer graphics techniques. An HDR image of a scene can also be constructed from a series of photographs of the scene taken by an image capture device at different shutter speeds or under different lighting conditions. Typically, the various photographs taken at different shutter speeds, for example, will reveal different details of the objects in the scene. Using one or more of several known techniques, the photographs can be combined into a single image that can contain details of the various objects in the scene that may be visible in one or more, but not necessarily all, of the photographs, and which has a relatively high dynamic range of luminance, that is, the brightness at respective points in the image. It is not unreasonable to expect that, in the not-too-distant future, image capture devices may be developed that have the capability of capturing HDR images.
A problem arises with such HDR images, since conventional computer video display or other output devices, such as monitors, printers, and the like, have a much lower dynamic range than the HDR images. For example, using one or more of various techniques, it is possible to produce an HDR image with a dynamic range on the order of 25,000:1, whereas the dynamic range of a typical display monitor is on the order of 100:1. Accordingly, in order to display such an HDR image on a conventional output device, it would be necessary to compress the dynamic range of the image to accommodate the capabilities of the particular output device or devices that are to be used for outputting the image.
Several methodologies have been proposed to reduce the dynamic range of an HDR image to allow the image to be accommodated by conventional output devices. Previous methodologies can be generally divided into two broad groups, namely,
(i) global, or spatially invariant, mappings, and
(ii) adaptive, or spatially variant, mappings.
Spatially invariant mapping methodologies generally map the luminance values such that two pixels in an HDR image that have the same value are mapped to the same value for use with the output device. On the other hand, adaptive mapping methodologies may map two pixels that have the same value to different output values, depending on the local characteristics of the HDR image. In both cases, the methodologies also take into account the characteristics of the particular target output device or devices that will be used for the resulting image, which will be referred to as a low-dynamic-range (“LDR”) image.
Spatially invariant mapping methodologies are typically simpler to implement since, once a mapping has been developed using the global characteristics of the HDR image and the characteristics of the target output device(s), the LDR image can generally be generated using, for example, look-up tables. Several spatially invariant mapping methodologies have been developed. Some methodologies scale, either in a linear manner or a non-linear manner, the dynamic range of the HDR image to provide the LDR image. Linear scaling is relatively simple to carry out, and it preserves relative contrasts exactly, but a severe loss of visibility of elements of the image can occur, particularly if the output device's dynamic range is significantly lower than the dynamic range of the image. Other spatially invariant mapping methodologies make use of histograms or gamma correction to develop mappings.
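The behavior of linear scaling described above, including the loss of visibility of dim details, can be illustrated with a minimal sketch; the function name `linear_scale` and the sample luminance values are hypothetical, chosen only to reproduce the roughly 25,000:1 dynamic range mentioned earlier:

```python
import numpy as np

def linear_scale(hdr, out_max=255.0):
    # Spatially invariant linear scaling: map the full HDR luminance
    # range onto [0, out_max]. Pixels with equal input values always
    # receive equal output values. (Hypothetical helper, for illustration.)
    lo, hi = hdr.min(), hdr.max()
    return (hdr - lo) / (hi - lo) * out_max

# Toy luminances spanning roughly a 25,000:1 dynamic range.
hdr = np.array([[0.01, 1.0],
                [250.0, 25000.0]])
ldr = linear_scale(hdr)
# The shadow detail collapses: both 0.01 and 1.0 land below one code
# value on a 0-255 output scale, so their distinction is lost.
```

Because the mapping depends only on the global minimum and maximum, it can be precomputed once and applied through a look-up table, as noted above.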
Several spatially variant mapping methodologies, of varying degrees of complexity, have been developed. Typically, in a spatially variant mapping methodology, it is assumed that each point in an image I of a scene can be represented by the product of a reflectance function R and an illuminance function L. The reflectance function for the image is commonly referred to as the intrinsic image of the scene. The largest luminance variations in an HDR image come from the illuminance function, since in the real world reflectances are unlikely to create large contrasts. Thus, in principle the LDR image can be generated by separating the image I into its R and L components, compressing the L component to provide a compressed illuminance L′, and generating the LDR image I′ as the product of R and L′. In principle, this should provide an LDR image in which contrasts between highly illuminated areas and areas in deep shadows in the HDR image are reduced, while leaving contrasts due to texture and reflectance undistorted.
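Assuming a separation into R and L is already available, the compress-and-recombine step described above can be sketched as follows; the power-law compression of L and the specific sample values are illustrative assumptions, not taken from any particular methodology:

```python
import numpy as np

def compress_illuminance(R, L, gamma=0.5):
    # Given an assumed separation I = R * L, compress only the
    # illuminance with a power law and recombine: I' = R * L**gamma.
    return R * np.power(L, gamma)

R = np.array([0.2, 0.8, 0.2, 0.8])        # reflectance: modest 4:1 contrasts
L = np.array([1.0, 1.0, 1000.0, 1000.0])  # illuminance: large 1000:1 contrast
I_prime = compress_illuminance(R, L, gamma=0.5)
# The illuminance ratio between the two regions drops from 1000:1 to
# sqrt(1000):1, about 31.6:1, while the 4:1 reflectance contrast within
# each region is preserved exactly.
```

This reflects the stated goal: contrast between brightly lit and shadowed areas is reduced, while texture and reflectance contrasts pass through undistorted.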
Problems arise in the spatially variant mapping methodologies as described above, since separating the image I into the reflectance R and illumination L components is an ill-posed problem. To accommodate that, typically some simplifying assumptions are used regarding R, L or both. In accordance with one assumption, it is assumed that the illumination function L varies slowly across an image, in comparison to the reflectance function R, which can vary abruptly. Under that assumption, the reflectance function R and the illumination function L can be separated by initially taking the logarithm of the image I. It will be appreciated that the logarithm of the image I is the sum of the logarithms of R and L. The sum can be low-pass filtered, with the low frequencies defining the logarithm of L and the high frequencies defining the logarithm of R. The logarithm of L and the logarithm of R can be separately exponentiated, to provide the assumed illumination and reflectance functions L and R, which can be processed as described above. Alternatively, subjecting the logarithm of I to high-pass filtering, followed by exponentiation, can achieve simultaneous dynamic range compression and local contrast enhancement.
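The log-domain separation just described can be sketched in one dimension; the use of a simple moving-average box filter as the low-pass filter, and the synthetic illuminance and reflectance signals, are illustrative assumptions only:

```python
import numpy as np

def separate_log_domain(I, kernel=5):
    # Assume log I = log R + log L, with L slowly varying. Estimate
    # log L as the low-pass (moving-average) filtered log I, and take
    # log R as the high-frequency residual. 1-D sketch with a box filter.
    logI = np.log(I)
    pad = kernel // 2
    padded = np.pad(logI, pad, mode='edge')
    logL = np.convolve(padded, np.ones(kernel) / kernel, mode='valid')
    logR = logI - logL
    # Exponentiate each component separately to recover R and L.
    return np.exp(logR), np.exp(logL)

# Synthetic scene: smooth, low-frequency illuminance times a rapidly
# varying, textured reflectance.
x = np.linspace(0.0, 1.0, 64)
L_true = np.exp(3.0 * x)               # slowly varying illuminance
R_true = 1.0 + 0.3 * np.sin(40.0 * x)  # high-frequency reflectance
R_est, L_est = separate_log_domain(L_true * R_true)
# By construction the decomposition is exact: R_est * L_est reproduces
# the input image; L_est can then be compressed as described above.
```

Note that this sketch relies entirely on the slow-variation assumption; as discussed next, that assumption fails at sharp illumination boundaries such as shadow edges, which is exactly where the filtering approach breaks down.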
The methodology described in the preceding paragraph works well under some circumstances. However, in a number of circumstances, the assumption that the illumination L varies only slowly is violated. For example, in sunlit scenes in which a shadow is cast, the illumination function L varies abruptly across the shadow boundary. In that case, the illumination function L would also have high frequencies, in which case attenuating only the low frequencies may give rise to various “halo” artifacts in the resulting LDR image around abrupt changes in illumination. Various modifications have been proposed to eliminate the halo effects, but many of them are computationally intensive, or they do not totally eliminate the haloing, or they may result in other artifacts being generated.