In photography, exposure refers to the amount of light gathered by the capturing device. Different exposure levels therefore typically produce different image intensities: a low-exposure image appears dark, while a high-exposure image appears bright. In some applications, it may be desirable to have images which correspond to different exposures. For example, in high dynamic range (HDR) imaging, images with different exposures can be combined to obtain an HDR image whose dynamic range exceeds what is possible with standard imaging or photographic techniques. For this purpose, more than one image has to be acquired with the imaging device, which makes the image acquisition more time-consuming and complicated.
For HDR imaging and other applications there is thus a need for determining an output image from an input image, wherein the output image corresponds to a different exposure than the input image.
A change of exposure can affect, for example, the intensities in a gray-scale image or the color components in a color image. Color consistency between a set of color input images is crucial for a variety of applications in computer graphics and image processing. This is especially true when the application at hand assumes that the input images have the same color properties in terms of pixel intensities.
In the following, exposure conversion denotes any technique in which an input image corresponding to a certain exposure (the first exposure) is transformed into an output image corresponding to an exposure (the second exposure) different from the first exposure. The input image and the output image may be gray-scale images or color images.
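The definition above can be illustrated with a minimal sketch. It assumes a simple power-law (gamma) camera response and a known ratio between the second and first exposures; a real camera requires its measured response function, and the function name and parameters here are illustrative only.

```python
import numpy as np

def convert_exposure(image, exposure_ratio, gamma=2.2):
    """Transform an image from a first exposure to a second exposure.

    image: float array with values in [0, 1] (gray-scale or color).
    exposure_ratio: second exposure divided by first exposure,
        e.g. 2.0 to simulate one stop more gathered light.
    Assumes an idealized power-law (gamma) camera response.
    """
    linear = np.power(image, gamma)            # undo gamma: back to linear light
    linear *= exposure_ratio                   # exposure scales linear light
    linear = np.clip(linear, 0.0, 1.0)         # over-exposed areas saturate
    return np.power(linear, 1.0 / gamma)       # re-apply the gamma curve

# Doubling the exposure brightens the image; near-white pixels clip (saturate).
dark = np.array([0.2, 0.5, 0.9])
bright = convert_exposure(dark, 2.0)
```

Under this model, a ratio greater than 1 produces a brighter output with possible saturation, while a ratio below 1 darkens the image, matching the behavior described above.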
The problem of exposure conversion is quite common in computer vision and computational photography, where multiple images with different exposures are used to increase the quality of a photograph or to detect the geometry of a scene. A typical challenge addressed by exposure conversion is the elimination of color differences between scenes whose content differs due to camera and/or scene-related motion.
The spectrum of such applications ranges from object tracking and identification using stereo or multi-camera systems to image and panorama stitching, image retrieval, face and object recognition, pedestrian and car detection, motion estimation and compensation, stereo matching and disparity map computation, and inter-frame color consistency in the context of video enhancement. In these applications, color dissimilarity can be caused by varying illumination conditions during the capturing process, by different intrinsic parameters (exposure settings, sensor sensitivity) and radiometric properties of the cameras, or simply by different capturing times. Such unintentional color differences are typical for multi-camera systems such as stereo and multi-view setups.
However, in some other scenarios, the nature of the application imposes an inherent radiometric variation between the input images. This is especially the case for high dynamic range imaging (HDRI), where the input Low Dynamic Range (LDR) images are differently exposed, ranging from under-exposed (dark images) to over-exposed (bright, with saturated areas). The input LDRs are subsequently merged into one single HDR image with a greater dynamic range. This technique requires the LDRs to be aligned, in order to recover the Camera Response Function (CRF) or perform Exposure Fusion (EF). However, in most cases, motion introduced by the capturing device or by the scene itself violates this assumption. This calls for motion compensation, which in turn depends on an initial exposure conversion between the input LDRs.
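The merging step can be sketched as a strongly simplified form of exposure fusion: each pixel of each aligned LDR is weighted by how well-exposed it is (here a Gaussian centred on mid-gray), and the weighted per-pixel average is taken across the stack. Full exposure-fusion methods additionally use contrast and saturation weights and multi-scale (pyramid) blending; the function below is an illustrative assumption, not any particular published implementation.

```python
import numpy as np

def exposure_fusion(ldr_images, sigma=0.2):
    """Fuse a stack of aligned, differently exposed LDR images.

    ldr_images: list of float arrays in [0, 1], same shape, pre-aligned.
    sigma: width of the well-exposedness weight around mid-gray (0.5).
    Returns a per-pixel weighted average of the stack.
    """
    stack = np.stack(ldr_images)                        # shape (N, ...)
    # Well-exposedness weight: pixels near mid-gray contribute most;
    # under- and over-exposed pixels are suppressed.
    weights = np.exp(-0.5 * ((stack - 0.5) / sigma) ** 2)
    weights /= weights.sum(axis=0) + 1e-12              # normalize per pixel
    return (weights * stack).sum(axis=0)

# An under-exposed and an over-exposed capture of the same static scene:
under = np.array([0.05, 0.40, 0.55])
over = np.array([0.45, 0.80, 0.95])
fused = exposure_fusion([under, over])
```

Note that this sketch presumes the LDRs are already aligned; as stated above, camera or scene motion violates that assumption in practice, which is why motion compensation, and hence exposure conversion between the input LDRs, is needed first.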