There have been many attempts to provide enhanced vision in situations or environments in which visible light is restricted, such as at night or in fog or haze. One approach is to convert an image formed by electromagnetic radiation at a non-visible frequency, such as infrared, into a visible representation and to combine that representation with a normally visible image of the same scene by superposition, thereby enhancing the latter. Such techniques are used for night vision, where visible light is very low, and show promise for enhanced vision in fog or haze, which absorb or scatter visible light. Infrared radiation is particularly promising for such enhanced vision because it is emitted as well as reflected by bodies, and because several wavelength bands within the infrared (IR) spectrum are subject to significantly less absorption by atmospheric moisture, such as that in haze and fog.
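The superposition described above can be sketched as a simple weighted blend of the two images. This is only an illustrative model, not a method disclosed here: the function name, the blending weight, and the assumption that both images are pre-scaled to [0, 1] are all assumptions for the sketch.

```python
import numpy as np

def fuse_superposition(visible, infrared, alpha=0.6):
    """Blend a visible-light image with a visible representation of an
    IR image by weighted superposition.

    `alpha` weights the visible channel and (1 - alpha) the IR
    representation; both inputs are 2-D arrays scaled to [0, 1].
    (Illustrative sketch; names and weighting are assumptions.)
    """
    vis = np.clip(np.asarray(visible, dtype=float), 0.0, 1.0)
    ir = np.clip(np.asarray(infrared, dtype=float), 0.0, 1.0)
    return alpha * vis + (1.0 - alpha) * ir

# Example: a dark (low-light) visible frame enhanced by a brighter
# IR frame of the same scene.
vis = np.full((4, 4), 0.1)
ir = np.full((4, 4), 0.8)
enhanced = fuse_superposition(vis, ir)
```

In practice the blend would be preceded by normalization and tone mapping of the IR representation, but the core enhancement step is this per-pixel combination.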
Unfortunately, the practical application of such techniques has been limited by the problem of parallax. The enhanced image has been produced by separately collecting and processing electromagnetic radiation from a scene in visible and non-visible wavelength bands and then combining the visible and non-visible image information into a single enhanced visible image. The separate sensors for visible and non-visible electromagnetic radiation, however, have not provided the same view of the scene, and it has therefore been difficult to combine their outputs into an accurate, enhanced visible image.
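The misregistration between the two sensor views can be illustrated with a minimal sketch that estimates a purely horizontal pixel offset by cross-correlation and shifts one image back before fusion. This is a deliberately simplified, one-dimensional model (real parallax varies with depth and direction); the function and the simulated 3-pixel offset are assumptions for illustration only.

```python
import numpy as np

def estimate_shift(img_a, img_b):
    """Estimate the horizontal pixel offset of img_a relative to img_b
    by locating the peak of the cross-correlation of their zero-mean
    column profiles. (A 1-D simplification of image registration.)"""
    a = img_a.mean(axis=0) - img_a.mean()
    b = img_b.mean(axis=0) - img_b.mean()
    corr = np.correlate(a, b, mode="full")
    return int(np.argmax(corr)) - (len(b) - 1)

# Simulate parallax: the IR sensor sees the same scene displaced
# 3 pixels relative to the visible sensor.
base = np.zeros(32)
base[10:14] = 1.0                        # a bright feature in the scene
visible_img = np.tile(base, (4, 1))
ir_img = np.tile(np.roll(base, 3), (4, 1))

shift = estimate_shift(ir_img, visible_img)
ir_aligned = np.roll(ir_img, -shift, axis=1)  # undo the offset before fusing
```

Without such an alignment step, superimposing the two images produces doubled edges and ghosting, which is precisely the difficulty described above.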