(1) Field of the Invention
This invention relates generally to digital color image processing, and more particularly to a method for fast digital color saturation control with low computational effort.
(2) Description of the Prior Art
Color is the perceptual result of light in the visible region of the spectrum, with wavelengths in the range of 400 nm to 700 nm, incident upon the retina. The spectral distribution of light relevant to the human eye is often expressed in 31 components, each representing a 10 nm band.
The human retina has three types of color photoreceptor cells, called cones, which respond to incident radiation with somewhat different spectral response curves. Because there are exactly three types of color photoreceptor, three numerical components are necessary and sufficient to describe a color, provided that appropriate spectral weighting functions are used.
Pixel values in accurate gray-scale images are based upon broadband brightness values. Pixel values in accurate color images are based upon tristimulus values. Color images are sensed and reproduced based upon tristimulus values, whose spectral composition is carefully chosen according to the principles of color science. As their name implies, tristimulus values come in sets of three. In most imaging systems, tristimulus values are subjected to a non-linear transfer function that mimics the lightness response of vision. Most imaging systems use RGB values whose spectral characteristics do not exactly match the tristimulus values of the human eye.
A combination of real world physical characteristics determines what the human vision system perceives as color. A color space is a mathematical representation of these characteristics. Color spaces are often three-dimensional. There are many possible color space definitions.
Digital imagery often uses red/green/blue color space, known simply as RGB. Said RGB space is illustrated in FIG. 1 prior art. The red/green/blue values start at zero at the origin and increase along the three axes. Because each color can only have values between zero and some maximum (255 for 8-bit depth), the resulting structure is a cube. We can define a color simply by giving its red, green, and blue values, or coordinates, within the color cube. These coordinates are usually represented as an ordered triplet. Several colors are shown in FIG. 1 prior art mapped into their locations in the 8-bit RGB cube, or color space. Black has zero intensity in red, green, and blue, so it has the coordinates (0, 0, 0). At the opposite corner of the color cube, white has maximum intensities of each color, or (255, 255, 255). Cyan, magenta and yellow, which are combinations of green and blue, red and blue, and red and green, respectively, are at (0, 255, 255), (255, 0, 255) and (255, 255, 0). Finally, note that a middle gray is at the exact center of the cube at location (128, 128, 128). Other colors can be described by specifying their coordinates within this cube.
The Cyan, Magenta, Yellow color space, known as CMY, is often used in printing. The CMY color space is related to the RGB space by being the inverse of it. The origin of this color space is not black but white, and the primary axes of the coordinate system are not red, green and blue but cyan, magenta and yellow. The color red in this space is a combination of yellow and magenta, while green is composed of yellow and cyan. In the printing industry, where images start with a white piece of paper (the origin) and ink is applied to generate colors, the CMY color space is commonly used.
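The inverse relationship between the two spaces can be sketched as follows. This is an illustrative snippet (the function names are our own), assuming 8-bit components with a maximum of 255:

```python
def rgb_to_cmy(r, g, b, depth_max=255):
    """CMY is the inverse of RGB: white (the blank paper) is the origin."""
    return depth_max - r, depth_max - g, depth_max - b

def cmy_to_rgb(c, m, y, depth_max=255):
    """Subtracting from the maximum again recovers the original RGB triplet."""
    return depth_max - c, depth_max - m, depth_max - y
```

For example, red (255, 0, 0) maps to (0, 255, 255) in CMY, i.e. no cyan, full magenta and full yellow, matching the combination described above.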
Another color space, often used by artists, is Hue, Saturation and Intensity (or HSI). In this color space, scenes are described not in terms of red, green, and blue, but in terms of hue, saturation, and intensity. We see things as colors, or hues, that either have a washed-out look or have deep, rich tones; this corresponds to low or high saturation, respectively. Hue is the attribute of a visual sensation according to which an area appears to be similar to one of the perceived colors red, green and blue, or a combination of them. Saturation is the colorfulness of an area judged in proportion to its brightness.
Images that digital cameras deal with are often obtained through tri-color filter sets, like RGB or CMY, and normally the processing of the images is done in either RGB or CMY color space. Some image operations are, however, complicated to perform in these color spaces, but become trivial in a different color space. This is the case with saturation control, which is best done in HSI space. This is obvious since saturation is one of the original coordinates of HSI. The conversion between color spaces, however, requires significant computer power, which takes time and consumes significant electrical power in a battery-powered device such as a digital camera.
By color saturation control is meant the process of increasing or decreasing the amount of color in an image without changing the image contrast. When saturation is lowered, the amount of white in the colors is increased (washed out). By adjusting the color saturation, the same image can be anything from a black and white image to a fully saturated image having strong colors.
The color saturation control is best explained using the Hue, Saturation, and Intensity (HSI) color space as shown in FIG. 2 prior art. HSI is a very different three-dimensional color space from RGB or CMY. FIG. 2 prior art illustrates a common representation of HSI color space. The cone shape has one central axis representing intensity. Along this axis lie all gray values, with black at the pointed end of the cone and white at its base. The greater the distance along this line from the pointed end, or origin, the higher the intensity. If the cone is viewed from above, it becomes a circle. Different colors, or hues, are arranged around this circle—the familiar color wheel used by artists. Hues are determined by their angular location on this wheel. Saturation, or richness of color, is defined as the distance perpendicular to the intensity axis. Colors near the central axis have low saturation and look pastel. Colors near the surface of the cone have high saturation.
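The geometry described above corresponds to the textbook RGB-to-HSI conversion formulas. A minimal illustrative sketch, assuming 8-bit RGB input, with hue returned in degrees and saturation and intensity on a 0-to-1 scale:

```python
import math

def rgb_to_hsi(r, g, b):
    """Textbook RGB-to-HSI conversion: H in degrees, S and I in [0, 1]."""
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    i = (r + g + b) / 3.0                      # intensity: position along the gray axis
    mn = min(r, g, b)
    s = 0.0 if i == 0.0 else 1.0 - mn / i      # saturation: distance from the gray axis
    # Hue: angular position on the color wheel around the intensity axis.
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0.0:
        return 0.0, s, i                       # gray: hue is undefined, 0 by convention
    theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    h = theta if b <= g else 360.0 - theta
    return h, s, i
```

Pure red thus maps to hue 0 degrees with full saturation, while any gray lands on the central axis with saturation 0, as the cone picture suggests.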
In prior art said HSI color space is used to change the color saturation of an image. This is relatively simple to do in HSI color space. First, the original image would have to be converted to HSI. Second, the saturation would be modified. Said modification of the saturation is simple because saturation is one of three coordinates of the HSI color space and thus only one coordinate of the pixels has to be changed. The other two coordinates remain unchanged. Finally, the image would have to be converted back to RGB. The same process applies to the CMY color space, which is very close in structure to the RGB color space. The image would have to be converted from CMY to HSI, then the color saturation would have to be modified and finally it would have to be converted back to CMY. It is obvious that a significant computational effort is required for all these conversions.
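The modification and back-conversion steps described above can be sketched as follows. This is an illustrative sketch of the textbook sector-based HSI-to-RGB formulas, not the method of any patent cited here; it assumes the pixel has already been converted to HSI, with hue in degrees and saturation and intensity on a 0-to-1 scale:

```python
import math

def scale_saturation(h, s, i, factor):
    """Modify only the saturation coordinate; hue and intensity stay unchanged."""
    return h, min(max(s * factor, 0.0), 1.0), i

def hsi_to_rgb(h, s, i):
    """Sector-based HSI-to-RGB conversion (H in degrees, S and I in [0, 1])."""
    h = h % 360.0
    # Rotate the hue into the 0-120 degree range; remember which sector it came from.
    if h < 120.0:
        sector = 0          # red sector
    elif h < 240.0:
        sector = 1          # green sector
        h -= 120.0
    else:
        sector = 2          # blue sector
        h -= 240.0
    hr = math.radians(h)
    a = i * (1.0 - s)
    b = i * (1.0 + s * math.cos(hr) / math.cos(math.radians(60.0) - hr))
    c = 3.0 * i - (a + b)
    rgb = [(b, c, a), (a, b, c), (c, a, b)][sector]
    # Back to 8-bit values, clipped to the color cube.
    return tuple(min(255, max(0, int(v * 255.0 + 0.5))) for v in rgb)
```

A scale factor of 0 collapses every color onto the gray axis (a black and white image), while factors above 1 push colors toward the surface of the HSI cone; the per-pixel trigonometry above illustrates why these conversions are computationally expensive.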
U.S. Pat. No. 5,555,031 (to Van Rooij) describes how in a video signal processing circuit, an adaptive signal compression is realized by correcting the color saturation by multiplication of color difference signals (R-Y, G-Y) by a same correction factor in such a way that color signal values (R, G, B) remain below their respective maximally allowed values without the luminance (Y) being limited as well. Preferably, the correction factor is obtained in dependence upon a non-linearly compressed luminance signal (Y′).
U.S. Pat. No. 5,561,474 (to Kojima et al.) discloses a video camera having a processing circuit, which converts electrical signals obtained from an imager into a luminance signal and color-difference signals. The color video camera further includes a memory, which stores a table of values designating color saturation levels and corresponding to a specific hue of a background color. The memory outputs a color saturation level based on the color difference signals obtained by imaging an object on a background of the specific hue. This color saturation level is compared with the luminance signal, and based on the comparison results the luminance signal and color difference signals are separated into components corresponding to the background area and components corresponding to the object area.
U.S. Pat. No. 5,852,502 (to Beckett) shows an apparatus and a method for producing a series of high-resolution color composite images. The digital camera has an optical assembly that directs visual images to a high-resolution monochrome sensor and a lower resolution color sensor. During the processing of the composite image, the monochrome grayscale value becomes the composite frame grayscale value, the color frame hue value becomes the composite frame hue value, and the color frame saturation value becomes the composite frame saturation value. Processing of the monochrome and color images is achieved on the pixel level. A processor calculates the grayscale value for each pixel in each successive monochrome and color image frame. The processor also calculates the hue value (color) and the saturation value (amount of color) for each pixel in each successive color image frame.