A multispectral image stores an intensity vector for each pixel of the image. The intensity vector has an intensity value representing radiance for each spectral band in a spectrum. A multispectral image may be captured by a multispectral instrument that has a slit for receiving a certain number of pixels across the instrument and that is moved over an area to be imaged. For example, a multispectral image of a land area may be collected by a multispectral instrument that is mounted on an airplane that flies over the land area in a direction perpendicular to the slit. Such an instrument may include a prism that splits the electromagnetic wave for each pixel into intensity values that represent different frequencies. FIG. 1 illustrates the data representing a multispectral image. The x-axis represents the direction across the instrument, the y-axis represents the direction of travel, and the z-axis represents the intensity vectors. The term “hyperspectral” generally refers to a multispectral image that has more than 100 spectral bands; some hyperspectral images have more than 256 spectral bands. Because of the large number of spectral bands for each pixel, hyperspectral data “cubes” contain vast amounts of data that require considerable storage and processing resources. Alternatively, a multispectral instrument can be constructed using a Fourier transform spectrometer, which creates the same type of data via repeated exposures of the same scene with slightly different optical path lengths. The resultant data is identical in format to that shown in FIG. 1.
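The data cube described above can be sketched as a three-dimensional array whose axes match the x/y/z axes of FIG. 1. This is an illustrative sketch only; the array sizes and dtype below are assumptions, not values from the source.

```python
import numpy as np

# Illustrative hyperspectral "cube" indexed as (cross-track pixel,
# along-track pixel, spectral band); sizes are assumed for illustration.
cross_track = 256   # pixels across the instrument's slit (x-axis)
along_track = 512   # scan lines in the direction of travel (y-axis)
bands = 128         # spectral bands (> 100, hence "hyperspectral") (z-axis)

cube = np.zeros((cross_track, along_track, bands), dtype=np.float32)

# The intensity vector for one pixel is a 1-D slice along the band axis.
pixel_spectrum = cube[10, 20, :]
print(pixel_spectrum.shape)  # (128,)

# Even this modest cube is tens of megabytes, illustrating the storage burden.
print(round(cube.nbytes / 1e6, 1), "MB")
```

Scaling the spatial dimensions or band count up to realistic collection sizes quickly reaches the “vast amounts of data” noted above.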
A multispectral instrument typically works in the long-wavelength infrared (“LWIR”) region, such as the 8-13 micron wavelength region. Waves in the LWIR region are relatively transmissive in the atmosphere, but can be absorbed by water vapor and other trace gases, with absorption increasing in humid conditions and with longer path lengths through the atmosphere. As a result of this absorption, the radiance detected by a multispectral instrument, referred to as “observed radiance,” is less than the radiance at the surface, referred to as “surface radiance.” A measure of this absorption is called “atmospheric transmittance” or “transmittance,” which represents the fraction of surface radiance that is not absorbed by the atmosphere. The observed radiance is also impacted by the emission of photons from the atmosphere itself, referred to as “atmospheric radiance,” which tends to increase the observed radiance. FIG. 2 is a diagram of the relationship between observed radiance Ro, surface radiance Rs, atmospheric radiance Ra, and transmittance τ, as represented by the following equation:

Ro=(Rs*τ)+Ra  (1)

where the downwelling radiance is assumed to be zero (i.e., the surface exhibits “black body behavior”). Many surfaces, including most vegetation, are nearly completely non-reflective in the LWIR region. As such, the effect of downwelling radiance on the observed radiance in this region is near zero.
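Given estimates of the transmittance and atmospheric radiance, Equation 1 can be inverted per band to recover the surface radiance, Rs = (Ro − Ra)/τ. A minimal sketch, with illustrative per-band numbers that are not measured values:

```python
import numpy as np

def surface_radiance(observed, transmittance, atmospheric):
    """Invert Equation 1, Ro = (Rs * tau) + Ra, for the surface radiance Rs.

    Assumes zero downwelling radiance (near-black-body surfaces), as in
    the source. All arguments are per-band arrays.
    """
    return (observed - atmospheric) / transmittance

# Illustrative three-band values (assumed, not from the source).
Ro = np.array([9.2, 8.7, 8.1])      # observed radiance per band
tau = np.array([0.85, 0.80, 0.75])  # atmospheric transmittance per band
Ra = np.array([1.1, 1.3, 1.6])      # atmospheric radiance per band

Rs = surface_radiance(Ro, tau, Ra)
print(Rs)
```

Substituting the recovered Rs back into Equation 1 reproduces Ro exactly, but in practice errors in the estimates of τ and Ra propagate directly into Rs, which motivates the estimation problem discussed next.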
As shown by Equation 1, calculating the surface radiance from the observed radiance requires both the atmospheric radiance and the transmittance. Tools have used two general approaches to estimate atmospheric radiance and transmittance: modeling and in-scene estimation. Modeling tools, such as the Moderate Resolution Atmospheric Transmission (“MODTRAN5”) program, rely on explicit models of the atmosphere with contributions from every gas and from each temperature/pressure level of the atmospheric gases. Because the atmosphere varies enormously over time and location, it can be difficult and time consuming to build an accurate model. Such modeling tools are typically useful only for first approximations unless they are iterated extensively, which is computationally expensive. In-scene estimation tools, such as the In-scene Atmospheric Compensation (“ISAC”) program, measure the relative atmospheric transmission and absorption for all the multispectral bands based on the observation that most LWIR scenes contain a majority of objects that are near black body in behavior. Such in-scene estimation tools generally produce reasonable first estimates for atmospheric radiance and transmission, but those estimates are not accurate enough to allow atmospheric artifacts to be distinguished from the observed radiance. Although modeling and in-scene estimation tools produce reasonable starting points for estimating atmospheric radiance and transmittance, the surface radiance calculated based on those estimates has a substantial amount of noise.
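The in-scene idea can be illustrated with a simplified sketch. Under Equation 1, each band's observed radiance is a straight line in the surface radiance, with slope τ and intercept Ra, so a line fit across many near-black-body pixels recovers both. This is not the ISAC algorithm itself: here the per-pixel surface radiance is taken as known (and noiseless) for illustration, whereas an actual in-scene tool must approximate it, e.g., from an estimated pixel temperature. All numbers below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_bands = 1000, 4

# Assumed "true" per-band atmospheric parameters for the simulation.
true_tau = np.array([0.9, 0.8, 0.7, 0.6])
true_Ra = np.array([0.5, 1.0, 1.5, 2.0])

# Near-black-body surface radiances varying pixel to pixel (illustrative).
Rs = rng.uniform(7.0, 11.0, size=(n_pixels, n_bands))
Ro = Rs * true_tau + true_Ra  # Equation 1 applied per pixel and band

# For each band, fit a line of observed vs. surface radiance:
# slope estimates tau, intercept estimates Ra.
fitted = []
for b in range(n_bands):
    slope, intercept = np.polyfit(Rs[:, b], Ro[:, b], 1)
    fitted.append((slope, intercept))
    print(f"band {b}: tau ~ {slope:.2f}, Ra ~ {intercept:.2f}")
```

With noiseless simulated data the fit recovers the parameters exactly; with real scenes, sensor noise and imperfect black-body behavior degrade the fit, consistent with the residual noise described above.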