In many image processing applications, such as geographic survey systems, several image sensors are employed. Images generated by these sensors usually contain data related to a single region of interest. In many instances, the images are received at different wavelengths by one or more sensors.
Such images carry abundant multidimensional information spread across several spectral bands. However, a standard display cannot present all of these bands at once. The images are therefore fused into a single image, which typically retains the important features extracted from each of the received images.
Several image fusion techniques exist for combining multiple images with varying information into one fused image. One such technique averages the images across the different wavelengths or spectral bands. It assigns equal weight to each spectral band, producing a result equivalent to integrating the spectral response at each pixel. However, because information is not uniformly distributed across bands, a large amount of it may be lost during fusion.
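The equal-weight averaging described above can be sketched as follows. This is a minimal illustration, not a specific implementation from any particular system; NumPy is assumed, and the function name `fuse_average` is hypothetical.

```python
import numpy as np

def fuse_average(bands):
    """Fuse spectral bands by simple averaging (equal weights).

    bands: array of shape (n_bands, H, W). Each band receives weight
    1/n_bands, which is equivalent to integrating the spectral
    response at each pixel.
    """
    bands = np.asarray(bands, dtype=np.float64)
    return bands.mean(axis=0)

# Toy example: three 2x2 "bands" covering the same region.
bands = np.array([
    [[10.0, 20.0], [30.0, 40.0]],
    [[20.0, 40.0], [60.0, 80.0]],
    [[30.0, 60.0], [90.0, 120.0]],
])
fused = fuse_average(bands)  # single 2x2 image
```

Note that a band whose values span a narrow range contributes little to the average, which is exactly how detail in low-intensity bands gets lost under equal weighting.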
Another image fusion technique assigns unequal weights to the spectral bands. The weight assigned to each image depends on the application and the purpose of the visualization. Specifically, application-dependent kernels are used to assign spectral weights to the spectral bands. Because the weights reflect application-specific information in each band, the likelihood that useful content is retained in the fused image increases. Even so, in both techniques described above, certain features such as weak edges and textures can be difficult to retain in the fused image.
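The unequal-weight variant can be sketched in the same style. The weights here are illustrative placeholders for whatever an application-dependent kernel would supply; NumPy is assumed, and `fuse_weighted` is a hypothetical name.

```python
import numpy as np

def fuse_weighted(bands, weights):
    """Fuse spectral bands using unequal, application-specific weights.

    bands: array of shape (n_bands, H, W).
    weights: length n_bands; normalized to sum to 1 so the fused image
    stays within the dynamic range of the inputs.
    """
    bands = np.asarray(bands, dtype=np.float64)
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()
    # Contract the weight vector against the band axis: (n,)x(n,H,W) -> (H,W)
    return np.tensordot(w, bands, axes=1)

# Same toy bands as before; weights emphasize the first band.
bands = np.array([
    [[10.0, 20.0], [30.0, 40.0]],
    [[20.0, 40.0], [60.0, 80.0]],
    [[30.0, 60.0], [90.0, 120.0]],
])
fused = fuse_weighted(bands, [3.0, 1.0, 0.0])
```

With weights [3, 1, 0] (normalized to [0.75, 0.25, 0]), the third band is discarded entirely; this makes concrete why weak features confined to a down-weighted band are difficult to retain.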