Color management systems (e.g., ICC/WCS-type systems) rely on device-to-device gamut mapping, which typically uses two gamuts, the input device gamut and the output device gamut, to transform source colors to destination colors. This is necessary because the set of all possible colors of a source color space may occupy a different volume than the set of all possible colors of a destination device, a condition referred to as gamut mismatch.
Gamut mapping algorithms often address gamut mismatch through compression, in which in-gamut colors are compressed into a smaller gamut and out-of-gamut colors are mapped into the region between that smaller gamut and the destination device gamut boundary. As a result, device-to-device gamut mapping may introduce unnecessary color compression and, consequently, loss of saturation in gamut-mapped images.
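The compression described above can be sketched in one dimension (chroma only) as a knee, or soft-clip, function: chroma below a knee point is preserved, while chroma between the knee and the source maximum is squeezed into the remaining headroom of the destination gamut. The knee fraction and the gamut extents below are illustrative assumptions, not values from any particular color management system.

```python
def compress_chroma(c, src_max, dst_max, knee_frac=0.8):
    """Map a chroma value from a source gamut extent to a destination
    extent using a simple knee (soft-clip) function.

    Chroma below the knee point is left untouched; chroma between the
    knee and the source maximum is linearly compressed into the region
    between the knee and the destination maximum.
    """
    knee = knee_frac * dst_max  # colors below this are reproduced exactly
    if c <= knee:
        return c  # identity region: no compression applied
    # Linearly compress the interval [knee, src_max] into [knee, dst_max].
    t = (c - knee) / (src_max - knee)
    return knee + t * (dst_max - knee)

# A source chroma of 120 in a source gamut extending to 150,
# mapped into a destination gamut extending to 100:
mapped = compress_chroma(120.0, src_max=150.0, dst_max=100.0)
```

Note how the identity region illustrates the saturation loss mentioned above: even a color that already fits the destination gamut (say, chroma 90 here) is pulled below its original value once it lies past the knee, because room must be reserved for out-of-gamut colors.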
Furthermore, such device-to-device gamut mapping typically ignores the fact that an image's color range may be smaller than the color range of the encoding color space. In other words, every device color space covers a certain range of colors, but the colors present in a given image do not necessarily span the entire range available to the device. This corresponds to underutilization of the encoding color space and typically results in suboptimal color reproduction.
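The underutilization just described can be quantified with a simple per-channel span measurement. The sketch below, a hypothetical helper rather than part of any standard color management API, reports what fraction of the encoding range each channel of an image actually occupies:

```python
def gamut_utilization(pixels, encoding_min=0.0, encoding_max=1.0):
    """Fraction of the encoding range actually used, per channel.

    `pixels` is a sequence of (r, g, b) tuples expressed in the encoding
    space. Returns a list of three ratios: each channel's span in the
    image divided by the full span of the encoding space.
    """
    encoding_span = encoding_max - encoding_min
    ratios = []
    for ch in range(3):
        values = [p[ch] for p in pixels]
        ratios.append((max(values) - min(values)) / encoding_span)
    return ratios

# A muted image whose colors span only part of the encoding range:
image = [(0.2, 0.3, 0.25), (0.5, 0.4, 0.35), (0.6, 0.45, 0.4)]
util = gamut_utilization(image)  # the red channel spans only 0.2..0.6
```

A gamut mapping that used these per-image extents as the source gamut, instead of the full encoding gamut, would not need to compress the image's colors as aggressively.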
Underutilization of a color space may also arise when an image is re-encoded, that is, when an image originally encoded in one color space is re-encoded into another, larger color space. Upon re-encoding into the larger space, the gamut of the original image, which is determined from the encoding profile, is lost, and only the gamut of the new encoding space remains available. Consequently, subsequent gamut mapping will typically produce suboptimal color reproduction.
Thus, there is a need for systems and methods for improved gamut mapping when converting image data from a source device to a destination device.