There are many sources of noise which may degrade an image of a scene. For example, an image of a scene will often be degraded by optical scattering of light caused, for example, by fog or mist. This optical scattering results in additional lightness being present in some parts of the image, and has been referred to as "airlight" in the literature. It is desirable to process an image so as to remove components of pixel values which are attributable to airlight.
If the distance between a camera position and all points of a scene represented by an image generated by the camera is approximately constant, airlight can be estimated and removed by applying equation (1) to each pixel of the image:

y = m(x − c)  (1)

where:
    x is an original pixel value;
    c is a correction selected to represent "airlight";
    m is a scaling parameter; and
    y is a modified pixel value.
Assuming that the parameter c is correctly chosen, processing each pixel of a monochrome image in accordance with equation (1) will enhance the image by removing airlight. However, determination of an appropriate value for the parameter c is often problematic.
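Equation (1) can be applied to a monochrome image as a simple per-pixel operation. The sketch below assumes c is already known; the choice of m shown (rescaling the corrected image back to the original maximum) is one illustrative convention, not specified by the source.

```python
import numpy as np

def remove_airlight(image, c, m=None):
    """Apply equation (1), y = m(x - c), to every pixel of a
    monochrome image.

    image : 2-D array of pixel values in [0, 255].
    c     : airlight estimate to subtract (assumed known here).
    m     : scaling parameter; if None, it is chosen so that the
            brightest original pixel maps back to its original value
            (an illustrative convention only).
    """
    x = image.astype(np.float64)
    if m is None:
        m = x.max() / max(x.max() - c, 1e-9)
    y = m * (x - c)
    # Clamp to the valid pixel range after correction.
    return np.clip(y, 0.0, 255.0)
```

For example, a hazy patch `[[120, 200], [130, 250]]` corrected with c = 100 has its darkest value pulled well down while the brightest value is restored to full scale, increasing contrast across the patch.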
Various known methods exist for estimating the parameter c by using contrast measurements, such as the ratio of the standard deviation of pixel values to the mean of pixel values. However, such contrast measures do not discriminate between airlight-induced contrast loss and inherently low-contrast scenes. For example, an image of sand dunes will often provide little contrast between the light and dark parts of the scene even when no airlight is present. Thus, ad-hoc schemes to determine the parameter c will sometimes result in severe image distortion.
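One such ad-hoc scheme can be sketched as follows. Since y = m(x − c) has contrast ratio std(x)/(mean(x) − c), one can solve for the c that brings the corrected image to an assumed haze-free target contrast. The target value and the clamping rule below are illustrative assumptions, not taken from the source; the flat-scene case shows exactly the failure mode described above.

```python
import numpy as np

def estimate_airlight(image, target_contrast=0.5):
    """Ad-hoc airlight estimate from the std/mean contrast measure.

    Solves std(x) / (mean(x) - c) = target_contrast for c, then
    clamps c to [0, min(x)] so no corrected pixel goes negative.
    target_contrast is an assumed haze-free contrast level
    (illustrative assumption, not from the source).
    """
    x = image.astype(np.float64)
    mean, std = x.mean(), x.std()
    c = mean - std / target_contrast
    return float(np.clip(c, 0.0, x.min()))
```

Note the distortion risk: an inherently low-contrast, haze-free scene (e.g. a uniform patch of sand) yields a large estimate of c, so the scheme would "correct" airlight that is not there.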
The method described above with reference to equation (1) is applicable to a monochrome image. Further problems arise if a colour image is to be processed. Typically the airlight contribution to a pixel value, and hence the value of the parameter c, will depend upon the wavelength (colour) of the light. Thus, if equation (1) is to be applied to colour images, different values of the parameter c may be needed for red, green and blue channels of the image.
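The channel-wise application of equation (1) can be sketched as below. The three-element c and the single shared m are assumptions for illustration; in practice each channel's c would come from a separate estimate.

```python
import numpy as np

def remove_airlight_colour(image, c_rgb, m=1.0):
    """Apply equation (1) separately to each channel of an RGB image.

    image : H x W x 3 array of pixel values.
    c_rgb : (c_r, c_g, c_b) airlight estimates, one per channel,
            since the airlight contribution depends on wavelength.
    m     : scaling parameter (a single shared m is assumed here).
    """
    x = image.astype(np.float64)
    # Broadcast the per-channel corrections across the image.
    c = np.asarray(c_rgb, dtype=np.float64).reshape(1, 1, 3)
    return np.clip(m * (x - c), 0.0, 255.0)
```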
The methods described above assume that the camera position is equidistant from all points in a scene represented by an image. Published European Patent EP 0839361 describes a method, developed by one of the present inventors, in which different values of the parameter c in equation (1) are used for different pixels, in dependence upon the distance between the camera position and the position in the scene represented by that pixel. This invention arose from a realisation that backscattered light may vary in dependence upon the distance between a camera position and a position in a scene.
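The idea of a distance-dependent c can be sketched with a per-pixel depth map. The saturating-exponential model c(d) = c_inf·(1 − exp(−k·d)) used below is an illustrative assumption, not the method of EP 0839361: it simply captures the realisation that the airlight contribution grows with distance and levels off for distant points.

```python
import numpy as np

def remove_airlight_ranged(image, depth, c_inf=100.0, k=0.01, m=1.0):
    """Per-pixel application of equation (1) with c depending on
    the distance between the camera and the scene point.

    depth : per-pixel distances, same shape as image.
    c_inf : airlight level approached at large distance (assumed).
    k     : rate at which airlight builds up with distance (assumed).
    """
    x = image.astype(np.float64)
    d = np.asarray(depth, dtype=np.float64)
    # Illustrative model: airlight saturates at c_inf with distance.
    c = c_inf * (1.0 - np.exp(-k * d))
    return np.clip(m * (x - c), 0.0, 255.0)
```

With this model, pixels at zero distance receive no correction, while very distant pixels have the full c_inf subtracted.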