Many computer vision algorithms rely on the assumption that image intensities are linearly related to the image irradiance recorded at the camera sensor. Since most cameras non-linearly alter irradiance values for purposes such as dynamic range compression, this assumption generally does not hold. It is therefore important to calibrate the response function of the camera so that the non-linear mapping can be inverted and subsequent algorithms can assume linearity of intensity observations.
Radiometric calibration aims to estimate the response function f of a camera. The response function f maps the irradiance I captured at the sensor to the image intensity M read from the camera:

M = f(I)

For vision algorithms and the like that require irradiance values I rather than measured intensities M as input, the inverse response function g = f^-1 must be determined so that measured intensities can be made linear with respect to irradiance. Since response functions f are typically monotonic, they are generally invertible.
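To make the forward/inverse relationship concrete, the following is a minimal sketch assuming a simple power-law (gamma) response — a common illustrative model, not the specific response function of any camera discussed here. It shows that applying the inverse response g = f^-1 to measured intensities recovers linear irradiance:

```python
import numpy as np

# Assumed gamma parameter for the illustrative power-law model.
GAMMA = 2.2

def f(irradiance):
    """Forward response M = f(I): irradiance -> measured intensity, both in [0, 1]."""
    return np.clip(irradiance, 0.0, 1.0) ** (1.0 / GAMMA)

def g(intensity):
    """Inverse response g = f^-1: measured intensity -> linear irradiance."""
    return np.clip(intensity, 0.0, 1.0) ** GAMMA

# Linearizing measured intensities recovers the original irradiance values.
irradiance = np.array([0.0, 0.25, 0.5, 1.0])
measured = f(irradiance)
linearized = g(measured)
assert np.allclose(linearized, irradiance)
```

Because f is monotonic on [0, 1], g is well defined; any downstream algorithm that assumes intensities are linear in irradiance would operate on `linearized` rather than `measured`.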
Many conventional methods for estimating a camera response function require as input an image sequence captured at varying exposures from a fixed camera. A few methods tolerate some camera or scene motion, but still require changes in exposure level. In many applications, however, such as those involving web cameras, multiple images at different exposures cannot be obtained for radiometric calibration. Accordingly, some previous methods have been proposed that do not require adjustments to camera exposure settings, but they may rely on assumptions about the radiometric response function that are often invalid. Other previous methods depend on statistical distributions of irradiance and may be susceptible to image noise. In general, previous radiometric calibration methods tend to degrade under imaging noise, particularly at high noise levels.
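The multi-exposure approach described above can be illustrated with a hedged sketch. Assuming a one-parameter gamma response M = I^(1/gamma) — an assumption for illustration, not the model of any particular prior method — a pixel observed at two known exposure levels determines gamma in closed form, since M2/M1 = k^(1/gamma) for exposure ratio k:

```python
import numpy as np

# Hypothetical ground-truth response parameter for the simulation.
GAMMA_TRUE = 2.2

def response(irradiance, gamma=GAMMA_TRUE):
    """Assumed gamma-encoding response model: M = I**(1/gamma)."""
    return np.clip(irradiance, 0.0, 1.0) ** (1.0 / gamma)

def estimate_gamma(m1, m2, exposure_ratio):
    """Recover gamma from one scene point seen at two exposures.

    With I2 = exposure_ratio * I1 and M = I**(1/gamma):
        M2 / M1 = exposure_ratio ** (1/gamma)
    so  gamma = log(exposure_ratio) / log(M2 / M1).
    """
    return np.log(exposure_ratio) / np.log(m2 / m1)

# Simulate one scene point captured at two exposures (ratio 2x, no noise).
i1 = 0.2
k = 2.0
m1 = response(i1)
m2 = response(k * i1)
gamma_hat = estimate_gamma(m1, m2, k)
```

In this noise-free simulation the estimate is exact; in practice, image noise perturbs m1 and m2 and propagates through the logarithms, which is one reason such methods degrade at high noise levels, as noted above.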