1. Field of the Invention
The present invention relates to an image processing method, and more specifically, to a method and apparatus for compensating output color components so that a dynamic range of the color components is within a specified range by using a simple algorithm.
2. Description of Related Art
As electronic engineering advances, electronic information provided to users includes not only simple text but also various forms of multimedia information. The multimedia information provided to users includes still images, moving pictures, animation and sound as well as text information. Above all, moving pictures are very important since they are essential to next generation Video On Demand (VOD) services as well as interactive services. Thus, much work has been done on the standardization of moving pictures.
Further, analog data is now being digitalized as a result of advances in digital electronics, and various digital image processing techniques have been introduced to deal with vast amounts of digital image data effectively. Digital image processing has several merits. First, digital image information is less susceptible to noise, whereas analog image information is degraded during processing because noise is inevitably added to the original signal. Second, digitalized image information can easily be processed by computers, and image compression becomes possible due to computer-based image processing.
Generally, digital image processing relates to displaying recorded analog images using computers. Digital image processing was first realized by Digital Video Interactive (DVI) technology, introduced in the late 1980's. DVI technology uses a graphics processor designed for image processing to perform sophisticated tasks that cannot be performed in real time by low-performance processors.
In addition, the Joint Photographic Experts Group (JPEG) and the Moving Picture Experts Group (MPEG) produced new coding standards superior to DVI, and it is anticipated that these standards will play a major role in digital image processing since they are supported by most companies related to the field. The MPEG standard is still being updated, for example, the MPEG-2 and MPEG-4 standards have been developed, in order to realize high-quality images such as high definition television (HDTV) on personal computers.
Furthermore, image processing techniques which only require main processors rather than separate hardware to process images have been introduced since 1991, and such techniques include QUICKTIME® developed by APPLE CORP., Video for WINDOWS® developed by MICROSOFT CORP., and INDEO® developed by INTEL CORP. These techniques are very useful for personal computers since they are not a large burden on main processors thereof.
As various digital image processing techniques are studied, standardization of these techniques is also required. Through standardization, many techniques become compatible with one another, so that many applications such as video conferencing, digital broadcasting systems and video phones can be realized. For example, a digital image compression technique used to store information on a recording medium such as a CD-ROM or an optical disk can then be compatible with a compression technique used for video conferencing.
Conventional image signals are processed in a 3-dimensional color space represented by Red (R), Green (G) and Blue (B) color components (collectively RGB colors) generated by using optical sources. Because the RGB colors are the primary colors from which other colors are composed, image signals can be displayed by using these 3 colors.
FIG. 1 is a Venn diagram illustrating a relationship among primary colors used for expressing colors.
Referring to FIG. 1, every color signal can be produced by combining the RGB colors. That is, R combined with G represents a Yellow (Y) signal, G combined with B represents a Cyan (C) signal, and B combined with R represents a Magenta (M) signal. When all primary colors are mixed, white light (W) is generated.
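The additive combinations above can be sketched in a few lines. The following is an illustrative sketch (not part of the disclosed apparatus) using 8-bit channel values, where the `mix` helper is a hypothetical name introduced here:

```python
# Illustrative sketch: additive mixing of RGB primaries to obtain the
# secondary (CMY) colors and white, using 8-bit channel values.
def mix(*colors):
    """Additively mix colors given as (R, G, B) tuples, clipping each channel to 255."""
    return tuple(min(255, sum(c[i] for c in colors)) for i in range(3))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

YELLOW  = mix(RED, GREEN)        # R + G -> (255, 255, 0)
CYAN    = mix(GREEN, BLUE)       # G + B -> (0, 255, 255)
MAGENTA = mix(BLUE, RED)         # B + R -> (255, 0, 255)
WHITE   = mix(RED, GREEN, BLUE)  # R + G + B -> (255, 255, 255)
```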
FIG. 2 illustrates a wavelength relationship between RGB and CMY color components.
Referring to FIG. 2, R light has the longest wavelength, while B light has the shortest. Y light has a wavelength between those of G and R light. Similarly, C light has a wavelength between those of G and B light, while M light has a wavelength between those of B and R light. Therefore, the CMY signals can be produced by mixing two different primary colors.
FIG. 3 is a color diagram of RGB and CMY color components.
The color diagram can be simplified into two triangles, one of which has the RGB points as its apexes (the RGB triangle), and the other of which has the CMY points as its apexes (the CMY triangle). In this case, all color coordinates in the RGB triangle can be represented as a combination of the R, G and B color signals. Similarly, all color coordinates in the CMY triangle can be represented as a combination of the C, M and Y color signals. However, a color coordinate which lies outside of each triangle cannot be represented by a combination of the color signals corresponding to that triangle. For example, color coordinates outside the RGB triangle (the dashed area in FIG. 3) cannot be expressed using R, G and B. Similarly, color coordinates outside the CMY triangle cannot be expressed using C, M and Y. Therefore, a wider color gamut can be expressed as more optical sources are used. The color gamut is especially important for displaying high quality images.
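The in-gamut test described above amounts to checking whether a 2-dimensional chromaticity coordinate lies inside a triangle of primaries. The following is a minimal sketch under assumed, illustrative vertex coordinates (not measured primary chromaticities from the disclosure):

```python
# Illustrative sketch: test whether a 2-D chromaticity point lies inside
# a triangle of primaries (e.g. the RGB triangle of FIG. 3), using the
# sign of the cross product for each edge of the triangle.
def in_gamut(p, a, b, c):
    """Return True if point p lies inside (or on) triangle (a, b, c)."""
    def cross(o, u, v):
        # z-component of (u - o) x (v - o)
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    # Inside (or on an edge) when the signs do not disagree.
    return not (has_neg and has_pos)

# Assumed vertex coordinates for illustration only:
R, G, B = (0.64, 0.33), (0.30, 0.60), (0.15, 0.06)
print(in_gamut((0.3, 0.3), R, G, B))  # near-white point: True
print(in_gamut((0.1, 0.8), R, G, B))  # point outside the triangle: False
```

A coordinate for which `in_gamut` returns `False` for the RGB triangle is exactly one that cannot be reproduced by mixing the three RGB sources.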
As a result, a multi-color display having more than 3 color sources has been introduced.
FIG. 4A schematically illustrates the operation of a conventional 3-color display.
In FIG. 4A, a conventional 3-color display device represents an image by using 3-dimensional color components (R0, G0, B0).
FIG. 4B schematically illustrates the operation of a conventional multi-color display apparatus. Compared to the 3-color display of FIG. 4A, the multi-color display converts input signals to 6-dimensional color components (R0, G0, B0, C, M, Y) and reproduces the converted signal using 6 color sources.
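To make the shape of this 3-to-6 mapping concrete, the following is a hypothetical, naive decomposition, offered only as an illustration of what a conversion from (R, G, B) to (R0, G0, B0, C, M, Y) looks like; it is not the method of the invention or of any cited prior art:

```python
# Hypothetical illustration only: one naive way to expand a 3-channel
# RGB signal into 6 channels (R0, G0, B0, C, M, Y). Each secondary
# source is driven by the overlap of its two constituent primaries.
def rgb_to_six(r, g, b):
    c = min(g, b)  # cyan reproduces the G/B overlap
    m = min(b, r)  # magenta reproduces the B/R overlap
    y = min(r, g)  # yellow reproduces the R/G overlap
    # Subtract what the secondary sources now reproduce from each
    # primary channel, clipping at zero so no channel goes negative.
    r0 = max(0, r - m - y)
    g0 = max(0, g - c - y)
    b0 = max(0, b - c - m)
    return (r0, g0, b0, c, m, y)

print(rgb_to_six(255, 255, 0))  # pure yellow input -> only the Y source
print(rgb_to_six(255, 0, 0))    # pure red input -> only the R0 source
```

Even this toy decomposition hints at the stated difficulty: the six output channels interact, so an unconstrained mapping can push channels out of range or distort chromaticness, which is what the compensation method of the invention addresses.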
Examples of conventional techniques introduced to convert input signals into multi-color components are as follows. First, U.S. Pat. No. 6,633,302, issued to Olympus Optical Co., Ltd., discloses a method of color conversion using an XYZ color space, in which a look-up table is used to compress the range of colors lying outside the XYZ color range. However, this method is hard to implement when more than 5 color sources are used. Second, a Genoese company has introduced a method in which color mapping from a 3-dimensional look-up table to a 2-dimensional look-up table is performed using spectral data. In addition, a 1-dimensional look-up table is used to adapt the sizes of the 2-dimensional color ranges. This method has the drawback that calculating the look-up tables is troublesome. Furthermore, output quality can be degraded since the maximum chromaticness and brightness that each display can express are different.
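The 1-dimensional look-up-table step mentioned above can be sketched generically. The table below uses an assumed knee-style compression curve with illustrative constants (`KNEE`, `MAX_OUT`); it is not the prior-art table itself, only an example of how a precomputed 1-D table adapts a signal to a display's range:

```python
# Illustrative sketch: a precomputed 1-D look-up table that maps each
# 8-bit input level to a compressed output level so the signal fits a
# smaller display range. Constants are assumptions for illustration.
KNEE = 200     # assumed threshold above which levels are compressed
MAX_OUT = 235  # assumed maximum level of the target range

lut = []
for v in range(256):
    if v <= KNEE:
        lut.append(v)  # pass low levels through unchanged
    else:
        # Linearly compress [KNEE, 255] into [KNEE, MAX_OUT].
        lut.append(round(KNEE + (v - KNEE) * (MAX_OUT - KNEE) / (255 - KNEE)))

def apply_lut(pixel):
    """Map each channel of an (R, G, B) pixel through the table."""
    return tuple(lut[c] for c in pixel)

print(apply_lut((100, 220, 255)))
```

Because the whole mapping is baked into the table, applying it per pixel is a single array access per channel; the cited drawback is the cost and complexity of computing such tables for each display in the first place.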
Therefore, a simple method for converting RGB color components into 6 color components comprising RGB and CMY while preventing degradation of an input image and maintaining maximum chromaticness and brightness is required.