1. Field of the Invention
The present invention relates to an image process apparatus and method which perform an under color removal process on inputted image data.
2. Related Background Art
In recent years, color printers employing various recording systems have been developed as output apparatuses for color images. Among these printers, ink-jet recording apparatuses have been widely used, because such apparatuses have many advantages. That is, an ink-jet recording apparatus can be manufactured at low cost, can print a high-quality image on various kinds of recording media, can easily be made compact in size, and the like.
Much of the image data outputted by such color printers originally corresponds to an output apparatus which utilizes a light emission element, such as a CRT (cathode-ray tube) monitor or the like. Therefore, such image data are composed of R (red), G (green) and B (blue) signals.
The color printer converts such RGB signals into C (cyan), M (magenta) and Y (yellow) signals or C (cyan), M (magenta), Y (yellow) and K (black) signals, by using an image process means. An image process method which is performed by such image process means has been proposed as U.S. patent application Ser. No. 08/711,953 filed on Sep. 6, 1996, by the same applicant as that of the present application.
FIG. 8 is a block diagram for explaining a concept of the image process method proposed by the same applicant as that of the present application.
It is assumed that the image data consists of eight bits for each of the RGB colors, and "eight bits" in the present application represents integers from 0 to 255.
The image data is inputted into an image input means 20001 and then R, G and B data each consisting of eight bits are transferred to a luminance and density conversion means 20002. The luminance and density conversion means 20002 performs a luminance and density converting process on the R, G and B data to convert these data into C, M and Y data each consisting of eight bits.
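A minimal sketch of the luminance and density conversion means 20002 follows. The actual conversion used by the proposed method is not specified in this passage, so the simplest common form, the 8-bit complement (C = 255 − R, and so on), is assumed here purely for illustration; the function name is likewise illustrative.

```python
def luminance_to_density(r, g, b):
    """Convert 8-bit R, G, B luminance values into 8-bit C, M, Y densities.

    Assumption: the conversion is the plain 8-bit complement; the real
    means 20002 may apply a more elaborate mapping.
    """
    return 255 - r, 255 - g, 255 - b
```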
Subsequently, a black component generation means 20003 generates a black component K on the basis of the minimum value of the C, M and Y data. If it is assumed that the function used for calculating the minimum value is min( ), the C1, M1, Y1 and K1 data, each consisting of eight bits and outputted from the black component generation means 20003, are obtained by the following equations.
C1=C
M1=M
Y1=Y
K1=min(C, M, Y)
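The equations above can be sketched directly in code; the function name is illustrative, and the C, M, Y inputs are assumed to be 8-bit densities produced by the preceding conversion.

```python
def generate_black_component(c, m, y):
    """Black component generation (means 20003):
    pass C, M, Y through unchanged and derive K1 = min(C, M, Y)."""
    k1 = min(c, m, y)
    return c, m, y, k1
```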
Subsequently, a masking means 20004 performs a masking process on the C1, M1, Y1 and K1 data to output C2, M2 and Y2 data.
Subsequently, an under color component separation means 20005 performs a process on the basis of the following equations, to output C3, M3, Y3 and U data.
U=min(C2, M2, Y2)
C3=C2−U
M3=M2−U
Y3=Y2−U
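The separation step above, which splits the masked data into an under (achromatic) component U and the remaining chromatic components, can be sketched as follows; the function name is illustrative.

```python
def separate_under_color(c2, m2, y2):
    """Under color component separation (means 20005):
    U is the common achromatic part, C3/M3/Y3 are the residuals."""
    u = min(c2, m2, y2)
    return c2 - u, m2 - u, y2 - u, u
```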
Subsequently, an under color process means 20100 generates C4, M4, Y4 and K4 data each consisting of eight bits, on the basis of the under color component data U. The under color process means 20100 is composed of a black component generation means 20006, a cyan component generation means 20007, a magenta component generation means 20008 and a yellow component generation means 20009, and thus generates the C4, M4, Y4 and K4 data each consisting of eight bits by using functions KGR( ), CGR( ), MGR( ) and YGR( ) shown in FIG. 9. That is, the following relation is satisfied.
C4=CGR(U)
M4=MGR(U)
Y4=YGR(U)
K4=KGR(U)
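The CGR( ), MGR( ), YGR( ) and KGR( ) curves of FIG. 9 are not reproduced in this passage, so the sketch below substitutes the simplest conceivable choice, full black replacement (KGR(U)=U with the three chromatic curves at zero), purely as an assumed illustration of the structure; the real curves would generally differ.

```python
# Hypothetical stand-ins for the FIG. 9 curves (assumptions, not the
# actual functions): replace the under color 1:1 with black and re-add
# no chromatic ink.
def kgr(u): return u
def cgr(u): return 0
def mgr(u): return 0
def ygr(u): return 0

def under_color_process(u):
    """Under color process (means 20100): derive C4, M4, Y4, K4 from U."""
    return cgr(u), mgr(u), ygr(u), kgr(u)
```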
Subsequently, the C3, M3 and Y3 data outputted from the under color component separation means 20005 and the C4, M4 and Y4 data outputted from the under color process means 20100 are respectively synthesized by a cyan component output means 20011, a magenta component output means 20012 and a yellow component output means 20013, to respectively generate C6, M6 and Y6 data. Such processes are performed on the basis of the following equations.
C6=C3+C4
M6=M3+M4
Y6=Y3+Y4
In this case, if the values of the C6, M6 and Y6 data are equal to or smaller than "0", such values are determined as "0". On the other hand, if these values are equal to or larger than "256", such values are determined as "255". On the basis of the C6, M6, Y6 and K4 data outputted through such processes, an output gamma correction means 20101 respectively outputs C7, M7, Y7 and K7 data each consisting of eight bits. The output gamma correction means 20101 is composed of a black output gamma correction means 20014, a cyan output gamma correction means 20015, a magenta output gamma correction means 20016 and a yellow output gamma correction means 20017, and calculates the following equations by using functions KGAM( ), CGAM( ), MGAM( ) and YGAM( ).
C7=CGAM(C6)
M7=MGAM(M6)
Y7=YGAM(Y6)
K7=KGAM(K4)
The output gamma correction means performs the conversion to linearize the relation between the inputted values (i.e., the C6, M6, Y6 and K4 data) and the optical reflection densities of the printed or outputted results. Ordinarily, each of the functions KGAM( ), CGAM( ), MGAM( ) and YGAM( ) shown in FIG. 10 consists of a reference table having 256 entries.
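A 256-entry reference table of this kind can be sketched as below. The gamma exponent of 2.2 and the function names are assumptions for illustration; the actual tables of FIG. 10 are tuned to the printer's density response and are not reproduced here.

```python
def build_gamma_table(gamma=2.2):
    """Build a 256-entry lookup table mapping an 8-bit input value to an
    8-bit gamma-corrected output value (illustrative curve only)."""
    return [round(255 * (i / 255) ** (1.0 / gamma)) for i in range(256)]

# One table per color component; only cyan is shown here.
cgam = build_gamma_table()

def apply_gamma(value, table):
    """Apply a correction table to one 8-bit component value."""
    return table[value]
```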
The functions CGR( ), MGR( ), YGR( ) and KGR( ) are set such that, in a case where the inputted image data satisfies the relation R=G=B, the printed result is an achromatic color. That is, such functions are structured to ensure that, in a case where the image data represents a gray scale, the printed result also represents the gray scale.
However, in such an image process method, in a case where the inputted image data includes a color component other than the under color, there has been a problem that tonality or gradient (i.e., linearity between the inputted value and the outputted result) cannot be compensated.
FIG. 11 is a view showing the relation between a blue signal (B) and an optical reflection density in the conventional image process apparatus.
In this case, the blue signal (B) has a value which represents blue components in the inputted C2, M2 and Y2 data (satisfying C2=M2 and min(C2, M2, Y2)=Y2), and can be obtained by the following equation.
B=C2−Y2
In FIG. 11, it can be understood that, in respect of cyan (C), the optical reflection density for a blue signal of 127 is lower than that for a blue signal of 255. That is, the tonality in blue is not compensated. In particular, as the color component other than the achromatic color becomes large, i.e., as the inputted image becomes more vivid, the tonality deteriorates further.
Such a tendency is remarkable in an ink-jet recording system. That is, when recording dot groups which represent the paper surface and an achromatic component (i.e., background color) in a gray scale, the tendency appears even on the condition that the gray scale is optically compensated, in a case where the dot group representing the achromatic color component comes into contact with, or overlaps, the dot group representing a color component other than the achromatic color component. This is because the ink-jet recording system changes the structure of a dye or a pigment on the surface of the recording medium.
An object of the present invention is to provide image process apparatus and method which compensate for tonality or gradient even in a case where a color component other than an under color component is included in input image data.
In consideration of the fact that the above-described tendency is remarkably seen in a blue region, another object of the present invention is to compensate for the tonality especially in the blue region.
In order to achieve the above objects, an image process method is provided comprising:
an input step of inputting image data; and
an under color process step of performing an under color process according to a color region to which the image data belongs, to generate a plurality of component signals including a black component signal,
wherein the color region is defined by hue.
Further, an image process method is provided comprising:
an input step of inputting image data;
a judgment step of judging a color region of the image data; and
an under color process step of performing an under color process according to the color region,
wherein, in the under color process step, a blue region is subjected to the under color process which is different from the under color process for other color regions.
Furthermore, an image process method is provided comprising:
an input step of inputting image data;
a judgment step of judging a color region of the image data; and
an under color process step of performing an under color process according to the color region,
wherein the under color process step selectively performs a first under color process or a second under color process in accordance with the color region judged in the judgment step, and wherein, in the first under color process, a black component is not added to a vivid portion, while, in the second under color process, the black component is added to the vivid portion.
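The selective processing summarized in the three method aspects above can be sketched as follows. The hue test for the blue region (based on the relation C2=M2 with Y2 as the minimum, stated earlier for the blue signal), the tolerance of 16, and both process formulas are assumptions introduced only to illustrate the selection structure, not values disclosed by the invention.

```python
def judge_is_blue_region(c2, m2, y2):
    """Judgment step (sketch): treat a pixel as blue when the yellow
    component is the minimum and cyan and magenta are nearly equal.
    The tolerance of 16 is a hypothetical threshold."""
    return min(c2, m2, y2) == y2 and abs(c2 - m2) <= 16 and c2 > y2

def under_color_process_selective(c2, m2, y2):
    """Return the black component K4 chosen by color region (sketch).

    Second process (blue region): black is added even in the vivid
    portion.  First process (other regions): black is suppressed as
    the chromatic content (vividness) grows.
    """
    u = min(c2, m2, y2)
    if judge_is_blue_region(c2, m2, y2):
        return u
    vividness = max(c2, m2, y2) - u
    return max(0, u - vividness)
```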
The above and other objects of the present invention will become apparent from the following detailed description when read in conjunction with the accompanying drawings.