In general terms, a digital image is a set of one or more two-dimensional numeric arrays in which each array element, or pixel, represents an apparent brightness measured by an imaging sensor. The brightness value at each pixel is often represented by an integer called a digital number (DN). Digital images are commonly generated by remote imaging systems that collect imagery in the visible and infrared regions of the electromagnetic spectrum. Images collected by such systems are used in numerous applications by both commercial and government customers.
When collecting digital imagery, specific bands of electromagnetic energy from the area being imaged are commonly collected by separate imaging sensors. For example, many imaging systems collect several spectral bands of electromagnetic energy, such as a red light band, a green light band, a blue light band, and a near-infrared band. Imaging systems may also include other spectral bands within the visible region and/or within the middle-infrared (also known as shortwave infrared) region. An image generated by such a system is referred to as a multispectral digital image. In such a case, a set of DN values exists for each line and column position in the multispectral digital image, with one DN value allocated to each spectral band. Each DN represents the relative brightness of the image in the associated spectral band at the associated pixel location. When generating a multispectral digital image, data from the imaging sensors is collected and processed to produce images. Images are commonly provided to customers as a multispectral image file containing imagery from each of the spectral bands. Each band includes DNs on, for example, an 8-bit or 11-bit radiometric brightness scale representing the radiance collected at the respective sensor for an area of the scene imaged. Several methods exist for processing the data from each band to generate an image that is useful for a given application. The data is processed in order to provide an image that has accurate color and contrast for features within the image.
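The band-per-pixel structure described above can be sketched in a few lines of NumPy. This is a minimal illustration with hypothetical data: a four-band image stored as a 3-D array of 8-bit DNs, indexed as [band, line, column], so that each pixel position carries one DN per spectral band.

```python
import numpy as np

# A minimal sketch, assuming hypothetical data: a 4-band multispectral
# image (blue, green, red, near-infrared) stored as a 3-D array of
# 8-bit digital numbers (DNs), indexed as [band, line, column].
lines, columns = 100, 100
rng = np.random.default_rng(0)
image = rng.integers(1, 255, size=(4, lines, columns), dtype=np.uint8)

# Band names and indices are illustrative, not from any particular system.
BANDS = {"blue": 0, "green": 1, "red": 2, "nir": 3}

# One set of DN values exists per pixel position: one DN per band.
pixel_dns = image[:, 50, 50]
print(pixel_dns.shape)  # (4,)
```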
The DN values generated by a particular imaging sensor have a limited range that varies from image source to image source according to the associated bit depth. Commonly used bit depths are 8 bits and 11 bits, resulting in DNs that range from 0 to 255 and from 0 to 2047, respectively. Digital images are generally stored in raster files or as raster arrays in computer memory, and since rasters use bit depths that are powers of two, image DNs may be stored in rasters having 1, 2, 4, 8, or 16 bits, with 8-bit and 16-bit being the most common. It is common practice to reserve special DN values to represent non-existent image data (e.g., 0, 255, and/or 2047). The corresponding pixels are called blackfill. Actual image data will then have DNs between 1 and 254 or between 1 and 2046.
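The reserved-DN convention above can be sketched as follows, assuming hypothetical values for an 8-bit band in which DN 0 is reserved to mark blackfill, so actual image data spans DNs 1 through 254:

```python
import numpy as np

# A minimal sketch (hypothetical values): an 8-bit band where DN 0 is
# reserved to mark non-existent image data ("blackfill"), so actual
# image data falls between 1 and 254.
BLACKFILL = 0
band = np.array([[0,  12, 254],
                 [0, 128,   1]], dtype=np.uint8)

valid = band != BLACKFILL            # mask of actual image pixels
print(valid.sum(), band[valid].min(), band[valid].max())  # 4 1 254
```

Masking out blackfill in this way matters for the enhancement steps discussed below, since reserved values should not influence statistics such as a band's histogram.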
Digital images, following collection by an imaging system, are commonly enhanced. Enhancement, in the context of a digital image, is a process whereby source-image DNs are transformed into new DNs that have added value for their subsequent use. Commonly, the data from each band of imagery is enhanced based on known sensor and atmospheric characteristics in order to provide an adjusted color for each band. The image is then contrast stretched in order to provide enhanced visual contrast between features within the image. Commonly, when performing the contrast stretch, the average radiance from each pixel within a particular band is placed in a histogram, and the distribution of the histogram is stretched to the full range of DNs available for the pixels. For example, if each band includes DNs on an 8-bit radiometric brightness scale, this represents a range of DNs between 0 and 255. The DNs from a scene may then be adjusted to use this full range of possible DN values, and/or adjusted to obtain a distribution of DNs centered about the mid-point of possible DN values. Generally speaking, the visual quality of the contrast stretch achieved using typical contrast stretch algorithms is highly dependent on scene content. Many contrast stretch algorithms change the color content of the imagery, resulting in questionable color content in the scene. For example, if the distribution of DNs is not centered within the range of possible DN values, such a contrast stretch can skew the DNs, resulting in a color offset; in an image containing structures, a house may appear to be the wrong color. In addition, it is often difficult to decide what stretch to apply to a given image. A user often balances the trade-offs between acceptable contrast and acceptable color balance when choosing a Commercial Off The Shelf (COTS) stretch to apply to a given image.
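The per-band stretch described above can be sketched as a simple linear rescale, using hypothetical DN values. Note that because each band is rescaled with its own minimum and maximum, the ratios between bands (and hence the color balance) can shift, which is the color skew discussed here.

```python
import numpy as np

# A minimal sketch of a per-band linear contrast stretch: each band's
# DN distribution is independently rescaled to the full 8-bit range
# (0-255). Because each band gets its own min/max, the ratios between
# bands (and hence the color balance) can shift.
def stretch_band(band: np.ndarray, out_max: int = 255) -> np.ndarray:
    lo, hi = int(band.min()), int(band.max())
    scaled = (band.astype(np.float64) - lo) / (hi - lo) * out_max
    return scaled.round().astype(np.uint8)

# Hypothetical low-contrast 3-band scene occupying only DNs 40-87.
scene = (np.arange(3 * 4 * 4, dtype=np.uint8) + 40).reshape(3, 4, 4)
stretched = np.stack([stretch_band(b) for b in scene])
print(stretched.min(), stretched.max())  # 0 255
```

Real COTS stretch algorithms are typically more elaborate (e.g., percentile clipping or histogram equalization), but they share this per-band independence and therefore the same susceptibility to color shifts.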
Such a color skew may be acceptable in applications where users are accustomed to color distortion, but it may result in customer dissatisfaction in applications where they are not. For example, an employee of a firm specializing in the analysis of digital imagery may be accustomed to such a color skew, while a private individual seeking to purchase a satellite image of an earth location of interest may find it unacceptable.
Other methods for enhancing contrast in digital images may be used that preserve the color of the image. While such methods preserve color, they are generally quite computationally intensive and require significant additional processing compared with the contrast stretch described above. For example, due to inadequacies in COTS stretch algorithms, images may be stretched manually by manipulating image histograms to achieve the desired result. This can be a very time-consuming, labor-intensive process. Another method of performing a color-preserving contrast stretch follows three steps. First, a processing system converts RGB data to the Hue Intensity Saturation (HIS) color space. Next, a contrast stretch is applied to the I (Intensity) channel within the HIS color space. Finally, the modified HIS data is converted back to the RGB color space. By adjusting the Intensity channel in the HIS color space, the brightness of the image is enhanced while the hue and saturation are maintained. The image in the RGB color space thus has enhanced contrast while maintaining color balance. This technique is reliable; however, it requires significant additional processing compared to a contrast stretch performed directly on RGB data as previously described. A major drawback of this type of stretch is the significant computer processing time involved in converting from RGB to HIS space and back.
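The effect of the three-step HIS method can be sketched without the full color-space round trip, under a stated assumption: with intensity defined as I = (R + G + B) / 3, stretching I and scaling each pixel's R, G, and B by the same factor stretched_I / I leaves hue and saturation (in the HIS model) unchanged. This shortcut is illustrative only, not the exact conversion the text describes; the names and values below are hypothetical.

```python
import numpy as np

# A sketch of the color-preserving idea: intensity I = (R + G + B) / 3
# is contrast stretched, and each pixel's R, G, B values are scaled by
# the same factor stretched_I / I. Uniform scaling of all three
# channels leaves hue and saturation (in the HIS model) unchanged.
# The stretch target of 200 (not 255) is chosen here so no channel
# clips, since clipping would itself distort color in bright pixels.
def intensity_stretch(rgb: np.ndarray, out_max: float = 200.0) -> np.ndarray:
    rgb = rgb.astype(np.float64)
    intensity = rgb.mean(axis=-1)                      # the I channel
    lo, hi = intensity.min(), intensity.max()
    stretched = (intensity - lo) / (hi - lo) * out_max
    gain = stretched / np.maximum(intensity, 1e-9)     # per-pixel factor
    out = rgb * gain[..., None]                        # same gain for R, G, B
    return np.clip(out, 0, 255).round().astype(np.uint8)

# Hypothetical low-contrast 2x2 RGB patch.
patch = np.array([[[60,  80, 100], [70,  90, 110]],
                  [[80, 100, 120], [90, 110, 130]]], dtype=np.uint8)
result = intensity_stretch(patch)
```

The per-pixel gain computation here is cheaper than a full RGB-to-HIS conversion and back, which is consistent with the processing-cost concern raised above.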