This invention relates generally to the processing of digital images to produce desired tone and color reproduction characteristics. Specifically, this invention makes use of capture and output device information, in conjunction with opto-electronic conversion function (OECF) and preferred reproduction models, to determine image-specific processing based on statistics derived from the image data. The processing determined may be applied automatically or with user input.
Digital cameras and scanners are used to capture image data from a large variety of scenes and originals. A number of automatic approaches are employed to process this data for reproduction; but when reproduction quality is critical, most images are processed manually by experts. Expertly processed images are also overwhelmingly preferred, even by inexperienced viewers, when comparisons are made.
Manual processing is time consuming and must be done by individuals with significant expertise, partly because the controls found in currently available software packages make it difficult to achieve desired tone and color reproduction. Simple controls tend to vary the reproduction in ways that miss the optimum, and complex controls offer too many degrees of freedom. If a way could be found to produce results similar to those produced by experts, either automatically or with simple and intuitive manual adjustments, digital photography would become a much more attractive alternative to conventional photography.
The practice of conventional photography suggests that improvements in this direction are possible. Currently, conventional photographs tend to be superior to automatically processed digital photographs in tone and color reproduction, with photographs processed at professional laboratories being far superior. Yet the flexibility of digital systems is greater than that of conventional systems. Digital photographs have the potential to be better than conventional photographs, and expertly processed digital photographs are currently at least as good. Digital processing approaches that mimic the relatively fixed behavior of conventional photographic systems should be straightforward to develop. Insight into digital processing approaches can also be obtained by examining what experts do manually. If this is done, it is found that most of the decisions made are based on evaluations of the image with respect to the scene or original and with the desired reproduction goal in mind. It should be possible to develop software algorithms that can perform these evaluations and process images accordingly.
Three major factors have hindered progress in this area. The first is that expert manual processing is almost always image-dependent and is based on understood tone and color reproduction objectives; but the development of most digital tone and color reproduction processing has focused on schemes which do not consider the image data, or consider it without regard for established pictorial considerations. The second is that the exact meaning of the image data, with respect to the scene or original, must be known. To date, the approaches used have ignored many non-linearities, such as those introduced by optical flare and other image capture effects, and have concentrated on techniques based almost exclusively on colorimetry. Colorimetry is strictly applicable only when the capture spectral sensitivities are color matching functions, or when the colorants used in the original are known and limited to a number which is not greater than the number of spectral capture channels. With digital cameras in particular, this is frequently not the case. Other difficulties in determining scene physical characteristics have resulted from a lack of standard, accurate measurement approaches. When basic flaws are present in a measurement approach, such as the omission of flare considerations or the fact that the spectral characteristics of the detector preclude colorimetric information from being obtained, attempts to calculate scene values inevitably produce erroneous results. These errors reduce accuracy expectations and mask other error sources, seriously degrading the correlation between captured data and scene characteristics.
The final factors which have hindered progress are the slow recognition of the need for preferred reproduction as an alternative goal to facsimile reproduction, of the fact that preferred reproduction depends on the scene or original, and of the fact that it also depends on the output and viewing condition characteristics. As mentioned previously, most digital tone and color reproduction processing development has focused on schemes which do not consider the image data. Also, color management approaches based on colorimetry attempt to produce reproductions with colorimetric values similar to those of the original. While colorimetric measures can account for some viewing condition effects, others are not considered, and the effects of the scene characteristics and media type on preferred reproduction are ignored.
It is clear that the factors limiting tone and color reproduction quality in digital images stem from an incomplete and sometimes inappropriate global strategy. The inventor has attempted to deal with these problems in two ways: through participation and leadership in the development of national and international standards and with the inventions presented here. The following list specifies gaps in the strategy and the attempts to fill them in:
1. Inaccurate and Non-Standard Device Measurements.
Image capture and output devices are measured in a variety of ways, with various measurement and device effects being ignored. Specifically:
a) Flare and other non-linearities in both capture devices and measuring instruments are frequently not considered, or are measured for a particular condition, and the resulting values are erroneously assumed to be applicable to other conditions.
b) Test targets captured typically have considerably lower luminance ratios than pictorial scenes, so the extremes of the capture device range are truncated or left uncharacterized.
c) Attempts are made to correlate image data to colorimetric quantities in scenes by capturing data of test targets with devices whose channel spectral sensitivities are not color matching functions. Correlations established in this fashion will depend on the test target used and may not apply to image data from other subjects.
d) Measurement precision is frequently specified in linear space, resulting in perceptually large errors for darker image areas. These areas are also most affected by flare, making dark area measurements particularly inaccurate.
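The effect described in d) can be illustrated numerically. The sketch below (illustrative values only, not from any particular device) shows how a fixed error in linear space becomes a much larger error in density units for darker areas:

```python
import numpy as np

# A fixed absolute error in linear space (here, half a code value out
# of 255) is compared at a highlight, a midtone, and a deep shadow.
linear_error = 0.5 / 255.0

errors = {}
for signal in (0.8, 0.1, 0.01):          # highlight, midtone, shadow
    density = -np.log10(signal)
    # Density error produced by the same linear-space error.
    density_err = abs(-np.log10(signal - linear_error) - density)
    errors[signal] = density_err
    print(f"signal {signal:5.2f}: density {density:.3f}, "
          f"error +/- {density_err:.4f} density units")
```

The shadow error is roughly two orders of magnitude larger than the highlight error, which is the perceptual consequence of specifying precision in linear space.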
e) Measurement geometries and lighting may be inconsistent or inappropriate to the device use conditions.
All of these effects compound to produce device characterization measurements which can be quite inaccurate. A common perception arising from these inaccuracies is that stable measurements are practically impossible to obtain and, therefore, that high measurement accuracy is unnecessary. Frequently, assumptions are made about the nature of image data, because it is felt that an assumption will be as close to the real value as a measurement.
However, in conventional photography, standardized density measurement techniques have evolved over decades. These techniques routinely produce repeatable measurements with several orders of magnitude higher accuracy than those obtained for digital systems, which is one of the reasons the less flexible conventional systems are able to outperform current automatic digital systems. Unfortunately, the reason these techniques are so accurate is that they have been refined specifically for conventional photographic materials. A great deal of work will be required to develop similar techniques for the devices and materials used in digital systems.
Work has just begun in this area, but significant progress is already being made. A new standard to be issued by the International Organization for Standardization (ISO), “ISO 14524, Photography — Electronic still picture cameras — Methods for measuring opto-electronic conversion functions (OECFs),” is almost complete, and work has begun to develop two new standards:
“Digital still picture cameras — Methods for the transformation of sensor data into standard colour spaces” and “Photography — Film and print scanners — Methods for measuring the OECF, SFR, and NPS characteristics.” While these efforts are collaborative and open, the inventor is the project leader for the former two efforts and participates in the latter.
2. Difficulties in Communicating Device Information Due to a Lack of Standard Data Types, Terms, and Data Formats.
Even if accurate measurements are available, a complete processing strategy requires that the measurements characterize the device in question by providing values for a complete, enumerated list of expected measurements. These values must also be provided in the image file in a standard format to be readily usable by a variety of processing software. “ISO 12234/1, Photography — Electronic still picture cameras — Removable memory, Part 1: Basic removable memory reference model” and “ISO 12234/2, Photography — Electronic still picture cameras — Removable memory, Part 2: Image Data Format — TIFF/EP” define the characterization data required and where and how it is to be included in image files. The inventor was a key participant in the development of these standards, particularly the enumeration of the characterization data. It should be noted that, because of a lack of consensus among the members of the ISO working group concerning the need for some of the data, some of the data categories are defined as “optional” at present, but the necessary categories are described.
3. How to Deal with Image-Dependent Capture Non-Linearities.
The measurement methods described in the standards mentioned above tell how to measure digital photography system characteristics using various test targets, but they do not deal with methods for estimating image-dependent capture non-linearities. A solution to this problem is described in this patent application.
4. Lack of Specification of Standard, Optimal Methods for Transforming Image Data from the Capture Device Spectral Space to Standard Color Spaces.
A number of methodologies have evolved for transforming capture device data into intermediate or standard color spaces. Many of these methods have merit in particular circumstances, but in many cases are applied inappropriately. The lack of accurate characterization data compounds the problem in that it is difficult to tell if the cause of low quality transformed data is an inappropriate transformation method, inaccurate characterization data, or both.
Another difficulty has been that, until recently, the only standard color spaces used for digital photography were those defined by the CIE (Commission Internationale de l’Éclairage, or International Commission on Illumination) based on the human visual system (HVS). For various reasons, it is generally impractical to design digital photography systems that mimic the HVS. Most digital photography systems analyze red, green, and blue (RGB) light; and most output devices modulate these spectral bands. In conventional photography, specific RGB bands are well defined by the spectral characteristics of the sensitizing dyes and colorant dyes used and by standards such as “ISO 7589, Photography — Illuminants for Sensitometry — Specifications for Daylight and Incandescent Tungsten” (which defines typical film spectral sensitivities) and “ISO 5/3, Photography — Density measurements — Spectral conditions.” These standards were developed many years ago, but the inventor actively participates in their maintenance and revision.
In digital photography, a wide variety of spectral sensitivities and colorants are used by different systems. Many of these systems are based on RGB analysis and synthesis, but the data produced in capturing a particular scene can vary substantially between capture devices. Also, if the same RGB image data is provided to different output devices, significantly different results will be obtained; and the differences will not be, for the most part, the results of optimization of the image data to the media characteristics.
Over the past five years, companies involved with digital imaging have recognized this problem and invested significant resources in solving it. Some progress has been made, particularly within the International Color Consortium (ICC), an association comprising most of the major computer and imaging manufacturers. However, the efforts of the ICC have been directed at producing consistent output from image data. The metrics employed are based on colorimetry and generally aim to produce output on different devices that is perceptually identical when viewed under a standard viewing condition. This aim is commonly referred to as “device-independent color.” Device-independent color is an appropriate goal in some cases, but frequently falls short. Different media have vastly different density range and color gamut capabilities, and the only way to make sure that all colors are rendered identically on all media is to limit the colors used to those of the lowest dynamic range (density range and color gamut) medium. This is certainly not desirable, and consequently a number of ICC member (and other) companies are now creating “ICC profiles” that produce colors from the same image data which vary between devices. (ICC profiles are device-specific transformations in a standard form that ostensibly attempt to transform image data to produce device-independent results.)
The basis for the color science on which device-independent color is based is the behavior of the HVS. Much of this behavior is reasonably well understood, but some is not, particularly the effects of some types of changes in viewing conditions, and localized adaptation. Also, the effects of media dynamic range on preferred reproduction have little to do with the HVS. Appearance models may be able to predict how something will look under certain conditions, but they give no information about observer preferences for tone and color reproduction in a particular photograph.
ICC profiles are currently being produced that attempt to transform captured image data to produce colorimetric values (input profiles), and that take image data and the associated input profile and attempt to transform the colorimetric values to data suitable for output on a particular device (output profiles). These profiles are generally considered to be device-specific, in that a particular input profile is associated with a particular capture device and a particular output profile is associated with a particular output device. This type of association makes sense in view of the philosophy of the approach. If a scene or original is characterized by particular colorimetric values, the goal of the input profile is to obtain these values from the captured data. Likewise, the goal of the output profile is to reproduce the colorimetric values on the output medium. Since the goal is facsimile colorimetric reproduction, the profile should be independent of the scene content or media characteristics.
If the capture device spectral sensitivities and/or colorants used in the original make it possible to determine colorimetric values from captured data, it is theoretically possible for ICC-type input profiles to specify the appropriate transformations. Also, if the characteristics of the capture device do not vary with the scene or original captured and the device spectral sensitivities are color matching functions, a single profile will characterize the device for all scenes or originals. If knowledge of the colorants is required to allow colorimetric data to be obtained, a single profile is adequate for each set of colorants. Unfortunately, flare is present in all capture devices that form an image of the scene or original with a lens (as opposed to contact-type input devices, like drum scanners). The amount of flare captured will vary depending on the characteristics of the scene or original. Occasionally, other image-dependent non-linearities are also significant. For ICC profiles to specify accurate transformations for devices where flare is significant, not only must the colorimetric spectral conditions be met, but the image-dependent variability must be modeled and considered. The resulting input profiles are dependent on the distribution of radiances in the scene or original, as well as the capture device used and the colorants (if applicable).
In summary, the primary difficulties with using ICC profiles to specify transformations are:
a) ICC input profiles only allow transformations to CIE color spaces, yet transformations to this type of color space are valid only if the capture device sensitivities are color matching functions, or the colorants found in the scene or original are known and are spanned by a number of spectral basis functions no greater than the number of device spectral capture channels. These conditions are almost never met when digital cameras are used to capture natural scenes.
b) The appropriate ICC input profile for a particular device and/or set of colorants to be captured is generally assumed to be invariant with the content of the scene or original. This assumption is not valid with the many capture devices that have significant amounts of flare, such as digital cameras and area array scanners.
c) The measurement techniques used to determine ICC profiles are variable, and the profiles constructed are usually not optimal. Frequently, profiles are not linearized correctly, neutrals are not preserved, and incorrect assumptions are made about the colorants in the scene or original. These inaccuracies are masked by and compounded with the inaccuracies mentioned in a and b.
d) While it is recognized that different output media must produce different colorimetric renderings of the same image data for the results to be acceptable, there is no standard methodology for determining how to render images based on the dynamic range of the scene or original as compared to the output medium.
The ICC efforts have resulted in a significant improvement over doing nothing to manage colors, but in their current manifestation are not viewed as a substitute for manual processing. Fortunately, the ICC approach is continuing to evolve, and other organizations are also contributing. In particular, there is a proposal in the ICC to allow another standard color space based on a standard monitor. This color space is an RGB space, making it more appropriate for use with many capture devices, particularly RGB-type digital cameras and film scanners. This proposal is also being developed into a standard: “CGATS/ANSI IT8.7/4, Graphic technology — Three Component Color Data Definitions.” The inventor is involved in the development of this standard. Also, the proposed new ISO work item mentioned previously, for which the inventor is the project leader, “Digital still picture cameras — Methods for the transformation of sensor data into standard colour spaces,” is specifically aimed at specifying methods for determining optimal transformations. As these efforts are completed, and if the methods for dealing with image-dependent non-linearities of my invention are used, it should become possible to specify capture device transformations that are determined in standard ways and based on accurate measurements.
5. How to Determine Preferred Reproduction Based on the Characteristics of the Scene or Original and the Intended Output Medium.
The first part of the digital image processing pipeline transforms capture device data into a standard color space. Once the data is in such a color space, it is necessary to determine an output transformation that will produce preferred reproduction on the output medium of choice. A method for accomplishing this is contemplated by my invention.
6. How to Take Image Data Processed for Preferred Reproduction on One Output Medium and Transform it for Preferred Reproduction on Another Output Medium.
Frequently, it is necessary to take image data which has already been processed for preferred reproduction on one output device and process it for preferred reproduction on another output device. A method for accomplishing this is also contemplated by my invention.
7. How to Implement User Adjustments that Produce Preferred Reproduction with Maximum Simplicity and Intuitiveness.
As stated previously, current manual processing software tends to be overly complicated and offers too many degrees of freedom or is incapable of producing optimal results. The implementation of the preferred reproduction model described in this patent application allows for user specification of key parameters. These parameters are limited in number; and changing them produces transformations which always produce another preferred rendering, limiting the possible outputs to those that are likely to be preferred.
Embodiments of my invention, in conjunction with the above-mentioned international standards under development, solve the above-identified problems by providing a complete strategy for the processing of digital image data to produce desired tone and color reproduction characteristics. The details of this strategy are as follows:
1. A scaled version of the image is constructed by spatially blurring and sub-sampling each channel of the image data. The scaled version is preferably a reduced version, but can be of any scale with respect to the original image. The blurring and sub-sampling are accomplished using one or more filters that first blur the image data using a blur filter with a radius that is primarily related to the number of pixels, rows of pixels, or columns of pixels in the image channel, but can also be affected by other factors, such as the intended output size or pixel pitch, the intended output medium, the numerical range of the image data, etc. Any common blur filter can be used, such as a boxcar average, a median filter, a Gaussian blur, etc. The blurred image is then decimated to produce the scaled image, which is stored for future use.
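Step 1 can be sketched as follows, assuming a simple boxcar average as the blur filter; the function name and scale factor are illustrative, not prescribed:

```python
import numpy as np

def make_scaled_image(channel, factor=4):
    """Boxcar-blur and sub-sample one image channel by `factor`."""
    h, w = channel.shape
    h, w = h - h % factor, w - w % factor        # trim to a multiple
    blocks = channel[:h, :w].reshape(h // factor, factor,
                                     w // factor, factor)
    # Averaging each factor x factor block both blurs and decimates,
    # producing the scaled image in a single step.
    return blocks.mean(axis=(1, 3))

channel = np.arange(64, dtype=float).reshape(8, 8)
scaled = make_scaled_image(channel, factor=4)
print(scaled.shape)   # (2, 2)
```

A Gaussian or median filter followed by decimation would serve equally well; the point is only that the small image preserves the large-scale radiance distribution from which statistics are later taken.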
2. The capture device focal plane OECFs are determined for each channel according to ISO 14524 for digital cameras or the standard which results from the new work item under development for scanners. The inverses of these OECFs are then determined, either in functional form or as look-up-tables (LUTs). This information may also be provided by the device manufacturer or included in the image file header with some file formats.
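A minimal sketch of the inversion in step 2, assuming a hypothetical gamma-like OECF (a real focal plane OECF would come from an ISO 14524 measurement, not this formula):

```python
import numpy as np

def oecf(exposure):
    """Hypothetical focal plane OECF: exposure -> 8-bit code value."""
    return np.round(255.0 * exposure ** (1 / 2.2))

# Tabulate the forward function on a fine exposure grid...
exposures = np.linspace(0.0, 1.0, 1024)
codes = oecf(exposures)

# ...then invert it as a LUT: exposure as a function of code value.
inverse_lut = np.interp(np.arange(256), codes, exposures)

mid_exposure = inverse_lut[128]   # exposure that produces code 128
print(round(mid_exposure, 4))
```

The same interpolation works when the OECF is only known at measured patch values rather than in functional form.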
3. The scaled image data is transformed into focal plane data using the inverse focal plane OECFs. Statistical values are then determined for each channel from the transformed data. Typical statistical values are the minimum and maximum focal plane exposures, the mean focal plane exposure, and the geometric mean focal plane exposure. Other statistical values may be determined in some cases.
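Step 3 might look like the following sketch; the inverse OECF and code values here are illustrative stand-ins:

```python
import numpy as np

scaled_codes = np.array([[10, 40], [120, 250]], dtype=float)

# Assume a simple inverse focal plane OECF: squaring normalized codes.
focal_plane = (scaled_codes / 255.0) ** 2

stats = {
    "min": focal_plane.min(),
    "max": focal_plane.max(),
    "mean": focal_plane.mean(),
    # Geometric mean computed in log space to avoid underflow.
    "geometric_mean": np.exp(np.log(focal_plane).mean()),
}
print(stats)
```

The geometric mean tracks the perceptually dominant exposure, which is why it appears alongside the arithmetic mean and the extrema.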
4. The capture device design and OECFs are evaluated to determine if the capture device has significant image-dependent non-linearities or flare. If image-dependent effects are found, they are modeled. The model to be produced should predict the amounts of non-linearities and flare based on statistical values determined from the scaled image data. Models can be constructed by capturing a variety of scenes or originals (such as ISO camera OECF charts with a variety of luminance ranges and background luminances), determining the flare and non-linearities encountered when capturing these charts, and then correlating the measured values with the scaled image statistics. Flare models can also be constructed by compounding extended point-spread-functions. A flare model may be provided by the device manufacturer, but there is no mechanism at present for including this information in the file format.
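Under the simplest assumption, namely that flare adds a uniform veiling exposure proportional to the mean focal plane exposure, step 4 could be sketched as below; the coefficient is purely illustrative and would in practice be fitted from OECF-chart captures:

```python
import numpy as np

FLARE_COEFF = 0.02   # hypothetical: 2% of mean exposure becomes flare

def predict_flare(scaled_focal_plane):
    """Predict the veiling flare exposure for this image."""
    return FLARE_COEFF * scaled_focal_plane.mean()

def remove_flare(exposure, flare):
    """Subtract predicted flare, clipping at zero."""
    return np.clip(exposure - flare, 0.0, None)

image = np.array([0.01, 0.1, 0.5, 0.9])   # scaled focal plane exposures
flare = predict_flare(image)
print(remove_flare(image, flare))
```

Note that the correction is proportionally largest in the shadows, which is where flare most distorts the captured data.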
5. The estimated camera or scanner OECFs for the image represented by the scaled image are determined for each channel using the OECF measurement standards mentioned, in conjunction with the flare and non-linearity model. The inverses of these OECFs are then determined, either in functional form or as LUTs. These inverse OECFs, which will be referred to as the input linearization information or input linearization tables, are stored for future use.
6. The capture device spectral sensitivities are evaluated and an appropriate transformation to an intermediate spectral or color space is determined. This intermediate color space is preferably a color space appropriate for application of the preferred reproduction model, such as a standard color space. If the intermediate color space is a standard color space, the transformation can be determined according to one of the methods in the proposed new ISO standard. In this case, the input linearization table is used to linearize the captured data, as required by the standard. The transformation may also be provided by the device manufacturer or included in the image file header with some file formats.
7. The scaled image is linearized using the input linearization table and transformed to the intermediate color space using the transformation determined. A luminance channel image is then determined using the equation appropriate for the intermediate color space. Statistical values are then determined from the luminance channel data. Typical statistical values are the minimum and maximum (extrema) scene luminances, the mean luminance, and the geometric mean luminance. Other statistical values may be determined in some cases. The scaled image data is generally not needed after these statistical values are determined.
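The luminance statistics of step 7 can be sketched as follows, assuming linear RGB data in a Rec. 709-like intermediate space (the weights belong to that assumed space, not to the application):

```python
import numpy as np

# A tiny linear-RGB scaled image: 1 row x 2 pixels x 3 channels.
rgb = np.array([[[0.2, 0.4, 0.1],
                 [0.8, 0.7, 0.6]]])

weights = np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 luminance weights
luminance = rgb @ weights                     # per-pixel luminance channel

stats = {
    "min": luminance.min(),
    "max": luminance.max(),
    "geometric_mean": np.exp(np.log(luminance).mean()),
}
print(stats)
```

Once these values are recorded, the scaled image itself can be discarded, as the section notes.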
8. The output device is determined, either by assuming it to be a standard monitor, by asking the user, or by the software (if intended for a specific device). The visual density ranges of all selectable output devices should be known. The viewing conditions under which the output will be viewed may also be specified.
9. The statistical values determined from the luminance channel of the scaled image, the density range of the output device, and the viewing illumination level (if known) are input to the preferred reproduction model. This model calculates an image and output specific preferred tone reproduction curve. This tone reproduction curve is typically applied to RGB channels, to produce preferred tone and color reproduction.
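As a deliberately simplified stand-in for the preferred reproduction model of step 9 (the actual model is more elaborate), one could choose a gamma that maps the scene log-luminance range onto the output medium's density range, illustrating how the image statistics drive the curve:

```python
import numpy as np

def tone_curve(luminance, lum_min, lum_max, output_density_range):
    """Map scene luminances into [0, 1] relative output luminance."""
    scene_log_range = np.log10(lum_max / lum_min)
    # Slope in log space: compress or expand the scene range to fit
    # the output density range.
    gamma = output_density_range / scene_log_range
    return (luminance / lum_max) ** gamma

lum = np.array([0.002, 0.05, 0.2, 1.0])      # scene luminance samples
out = tone_curve(lum, lum_min=0.002, lum_max=1.0, output_density_range=2.2)
print(out)
```

By construction, the darkest scene luminance lands exactly at the output medium's maximum density, and the brightest at its minimum.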
10. The output device electro-optical conversion function (EOCF) characteristics are determined by measuring the output of the device for all possible input digital levels or, in the case of the standard monitor, by using the standard monitor EOCF. An output transformation is then determined by combining the preferred tone reproduction curve with the output device EOCF. This transformation may be expressed in functional form or as a LUT and will be referred to as the output table.
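The table construction of step 10 can be sketched as follows, assuming a gamma-2.2 monitor EOCF and a placeholder preferred tone curve:

```python
import numpy as np

def eocf(code):
    """Assumed monitor EOCF: code value -> displayed luminance."""
    return (code / 255.0) ** 2.2

def inverse_eocf(luminance):
    return 255.0 * luminance ** (1 / 2.2)

def preferred_tone(linear):
    """Placeholder preferred tone curve (illustrative only)."""
    return linear ** 0.9

# Output table: for each linear input level, the code value that makes
# the monitor display the preferred luminance.
linear_levels = np.linspace(0.0, 1.0, 256)
output_table = np.round(inverse_eocf(preferred_tone(linear_levels)))
print(output_table[[0, 128, 255]])
```

Composing the two functions into one LUT means the full image needs only a single table lookup per pixel at output time.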
11. The image data for the entire image is linearized using the input linearization tables. It is then preferably transformed into the intermediate color space. This color space can be a standard RGB space, although monochrome image data should be transformed into a luminance-type space, and this processing also may be used to produce desired tone reproduction characteristics with luminance-chrominance type color space data. The output tables are then applied to the linear intermediate color space data to produce digital code values appropriate for a standard monitor or the specified output device. If necessary, standard RGB values or other color space values corresponding to preferred reproduction may be converted to another color space for use by the output device. In this case, the goal of the processing employed by the output device is to produce a facsimile reproduction of the preferred reproduction as expressed in the standard RGB or other original color space. The preferred reproduction should have been determined with consideration of the luminance range capabilities of the output medium.
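The full-image processing of step 11 can then be sketched as the application of the two tables in sequence. Both LUTs below are illustrative placeholders; because the preferred tone curve is omitted here, the pipeline roughly round-trips the input codes:

```python
import numpy as np

# Input linearization table: 8-bit code -> linear intermediate value.
input_linearization = (np.arange(256) / 255.0) ** 2.2

# Output table: 1024 linear levels -> 8-bit output code (gamma 1/2.2).
output_table = np.round(255.0 * (np.arange(1024) / 1023.0) ** (1 / 2.2))

image_codes = np.array([[0, 64], [128, 255]])
linear = input_linearization[image_codes]        # linearize full image
indices = np.round(linear * 1023).astype(int)    # index the output table
output_codes = output_table[indices]
print(output_codes)
```

In a real pipeline the intermediate color space transformation and the preferred tone curve would sit between the two lookups, as the step describes.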