An emerging technology in the field of digital photography is High Dynamic Range Imaging (HDRI). HDRI captures most of the luminance present in the real world, making it possible to reproduce a picture as close as possible to reality on an appropriate display. High dynamic range imaging thus provides a representation of scenes with values commensurate with real-world light levels. The real world produces a range of light intensities spanning roughly twelve orders of magnitude, far greater than the three orders of magnitude common in current digital imaging. The range of values that each pixel can currently represent in a digital image is typically 256 values per color channel (with a maximum of 65536 values), which is not suitable for representing many scenes. With HDR images, scenes can be captured with a range of light intensities representative of the scene, and with a range of values matched to the limits of human vision rather than to any particular display device. Images suitable for display with current display technology are called Low Dynamic Range (LDR) images. The visual quality of high dynamic range images is much better than that of conventional low dynamic range images. HDR images differ from LDR images in how they are captured, stored, processed, and displayed, and they are rapidly gaining wide acceptance in photography.
As the use of HDRI spreads in the field of digital photography, there is a growing need for HDRI displays capable of displaying both still images and videos. This represents a significant improvement in display quality over traditional displays. However, since existing media is not of High Dynamic Range (HDR), the utility of HDRI displays is limited to newly acquired HDR images captured with HDRI sensors. Converting existing Low Dynamic Range (LDR) images into equivalent HDR images is commonly known as “reverse tone mapping”. Reverse tone mapping generally requires two phases. A first phase inverse maps the luminance of an input LDR image into an expanded HDR luminance (also called HDR radiance). Due to image quantization, this phase results in a loss of details and introduces noise in regions of high luminance. The second phase remedies this defect by smoothing such regions, while also allowing the dynamic range to be increased further.
One known solution for the first phase is the approach taken in the article by Rempel A. G., Trentacoste M., Seetzen H., Young H. D., Heidrich W., Whitehead L., and Ward G., entitled “Ldr2Hdr: on-the-fly reverse tone mapping of legacy video and photographs”, ACM SIGGRAPH 2007 Papers (San Diego, Calif., Aug. 5-9, 2007). This approach relies on a fast inverse method that is suitable for real-time video processing. According to this approach, inverse gamma mapping is performed, and then the dynamic range is extended to 5000. Smoothing filters are then applied to decrease the effect of quantization.
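The first phase of such a fast inverse method can be sketched as follows. This is a minimal illustration only, assuming a display gamma of 2.2 and a target peak of 5000 as illustrative parameters; it is not the exact pipeline of the cited article.

```python
import numpy as np

def naive_inverse_tone_map(ldr, gamma=2.2, peak=5000.0):
    """Sketch of a first-phase inverse mapping: undo the display gamma,
    then linearly stretch the linear values to an assumed peak of 5000.
    `gamma` and `peak` are illustrative assumptions, not the paper's values."""
    ldr = np.asarray(ldr, dtype=np.float64) / 255.0  # 8-bit code values to [0, 1]
    linear = ldr ** gamma                            # inverse gamma mapping
    return linear * peak                             # extend the dynamic range

# Example: three 8-bit code values mapped to expanded luminance.
codes = np.array([0, 128, 255], dtype=np.uint8)
hdr = naive_inverse_tone_map(codes)
```

Because the mapping is a pixel-wise function, it is cheap enough for real-time video; the smoothing of quantization artifacts would be a separate filtering step applied afterwards.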
Another solution to perform the first phase of reverse tone mapping is described in the article entitled “Inverse tone mapping”, Proceedings of the 4th international Conference on Computer Graphics and interactive Techniques in Australasia and Southeast Asia (Kuala Lumpur, Malaysia, Nov. 29-Dec. 2, 2006), GRAPHITE '06, ACM, New York, N.Y., 349-356 by Banterle F., Ledda P., Debattista K., and Chalmers A. This solution uses an inverse mapping function that is based on a global tone mapping operator, previously described by Reinhard E., Stark M., Shirley P., and Ferwerda J., in an article entitled “Photographic tone reproduction for digital images”, ACM Trans. Graph. 21, 3 (July 2002), 267-276. Inverse values are then obtained by solving a quadratic equation, thereby generating a considerably larger dynamic range while selectively shrinking the range at certain pixels. However, these existing solutions provide an inverse tone mapping function for the first phase that is not accurate enough. The obtained radiance does not exactly match real-world radiance because the “generic” inverse mapping function only roughly approximates real-world radiance values.
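The quadratic inversion can be illustrated concretely. The Reinhard global operator maps world luminance L_w to display luminance L_d = L_w(1 + L_w/L_white^2)/(1 + L_w); rearranging gives the quadratic (1/L_white^2)L_w^2 + (1 - L_d)L_w - L_d = 0, whose positive root recovers L_w. The sketch below assumes an illustrative white point of 10; the forward operator is included only to check the round trip.

```python
import numpy as np

def inverse_reinhard(ld, l_white=10.0):
    """Invert the Reinhard global operator by taking the positive root of
    (1/L_white^2) * L_w^2 + (1 - L_d) * L_w - L_d = 0.
    `l_white` is an illustrative white-point choice, not a prescribed value."""
    ld = np.asarray(ld, dtype=np.float64)
    w2 = l_white ** 2
    return 0.5 * w2 * (ld - 1.0 + np.sqrt((1.0 - ld) ** 2 + 4.0 * ld / w2))

def reinhard(lw, l_white=10.0):
    """Forward Reinhard operator, used here only to verify the inversion."""
    lw = np.asarray(lw, dtype=np.float64)
    return lw * (1.0 + lw / l_white ** 2) / (1.0 + lw)

# Round trip: world luminances survive forward mapping and inversion.
lw = np.array([0.1, 1.0, 5.0])
recovered = inverse_reinhard(reinhard(lw))
```

The inversion expands the compressed highlight range considerably, which is precisely why the resulting radiance is only a rough approximation of the true scene radiance: the forward operator discards information that the quadratic cannot restore.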
There exist two different approaches to perform the second phase of reverse tone mapping. The first approach, described by Rempel et al. in the article entitled “Ldr2Hdr: on-the-fly reverse tone mapping of legacy video and photographs”, ACM SIGGRAPH 2007 Papers (San Diego, Calif., Aug. 5-9, 2007), generates a Gaussian mask over pixels surpassing a high value. Moreover, this approach uses an ‘Edge-stopping’ function to improve local contrasts at edges. The resultant brightness function is used to extend lighting considerably. A more complex technique is the one described in Banterle et al., “Inverse tone mapping”, Proceedings of the 4th international Conference on Computer Graphics and interactive Techniques in Australasia and Southeast Asia (Kuala Lumpur, Malaysia, Nov. 29-Dec. 2, 2006), GRAPHITE '06, ACM, New York, N.Y., 349-356. This second approach segments the input image into regions of equal light intensities, using a median cut algorithm (Debevec P., “A median cut algorithm for light probe sampling”, in ACM SIGGRAPH 2006 Courses (Boston, Mass., Jul. 30-Aug. 3, 2006), SIGGRAPH '06, ACM, New York, N.Y., 6). The centroids of those regions are used to estimate light densities and to construct an “expand” map. The map is then used to generate the final HDR image by guiding an interpolation operation between the input LDR image and the inverse mapped LDR image. These solutions for the second phase of reverse tone mapping rely on finding pixels with high luminance values and using them to expand the dynamic range at those pixels. However, such extrapolation only extends the luminance of the hotspots (highlights) and nearby regions, and never decreases the luminance in dark regions (shades). Accordingly, they effectively perform a one-sided dynamic range extension using local operations (the shades are only globally expanded), thereby degrading the quality of shaded regions in the resultant HDR image.
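The common structure of this second phase, in which a smoothed brightness mask guides an interpolation between the input LDR luminance and the inverse-mapped luminance, can be sketched as follows. The threshold, the blur radius, and the use of a simple box blur in place of Gaussian filtering are all illustrative assumptions, not values from either cited paper.

```python
import numpy as np

def smooth(mask, radius=2):
    """Separable box blur standing in for the Gaussian filtering of the mask.
    (Illustrative substitute; the cited approaches use Gaussian smoothing.)"""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    for axis in (0, 1):
        mask = np.apply_along_axis(
            lambda m: np.convolve(m, kernel, mode='same'), axis, mask)
    return mask

def expand_map_blend(ldr_lum, hdr_lum, threshold=0.8, radius=2):
    """Sketch of the second phase: build a smoothed brightness mask over
    bright pixels and use it to interpolate between the input LDR luminance
    and the inverse-mapped HDR luminance. `threshold` and `radius` are
    illustrative parameters."""
    mask = smooth((np.asarray(ldr_lum) > threshold).astype(np.float64), radius)
    mask = np.clip(mask, 0.0, 1.0)
    return (1.0 - mask) * ldr_lum + mask * hdr_lum
```

Note that the mask is nonzero only around bright pixels, so dark regions are left at their input luminance: the sketch exhibits exactly the one-sided behavior criticized above, since shades are never locally pushed downward.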