The present invention relates to an image processing system which executes edit processing on image data by setting edit commands for each area.
A digital copying machine comprises an IIT (image input terminal) for reading a manuscript, an IPS (image processing system) for processing the image data thus read, and an IOT (image output terminal) for outputting a copy based on the image data, for example by driving a laser printer. In the IIT, image information of the manuscript is picked up as an analog electric signal corresponding to reflectivity using a CCD (charge-coupled device) sensor, and this signal is converted to multi-gradation digital image data. In the IPS, the image data obtained at the IIT are processed by executing correction, conversion, editing, etc. In the IOT, a half-tone image is outputted by turning the laser of a laser printer on and off based on the image data processed in the IPS.
In the above digital copying machine, the processing of the IPS makes it possible to output an image from multi-gradation image data according to the type of the original, i.e. to output a sharp image with edge enhancement in the case of characters, a smoothed image with moire and half-tone dots eliminated in the case of intermediate tones such as photographs, or a color image with high reproducibility and adjusted definition. Further, it is possible to add edit functions such as trimming (extraction of an image) and masking (erasing of an image) from the manuscript and, in addition, insertion of a logo, coloring, painting, color conversion, negative-positive inversion, scaling down/up, shifting, etc. For this IPS, the IIT reads the manuscript while separating it into the 3 primary colors of R (red), G (green) and B (blue), converts the signals to digital form and outputs them. In the IOT, the image is outputted by overlapping half-tone images in the colors Y (yellow), M (magenta), C (cyan) and K (black). Thus, a color digital copying machine is provided. Accordingly, in a color image processing system such as the above color digital copying machine, a developing machine is used for each color material, scanning is repeated 4 times for the development process of each color material, and the full-color image data read from the manuscript are processed each time.
Description will be given now on general features of a color digital copying machine as above, taking an example on an arrangement proposed by the present applicant (e.g. U.S. patent application No. 482,977, filed Feb. 22, 1990, now U.S. Pat. No. 5,113,251 issued May 12, 1992). FIG. 1 is a block diagram for a conventional-type color digital copying machine.
In FIG. 1, the IIT 100 reads a color manuscript by separating it into the 3 primary colors of B, G and R using a CCD line sensor and converts it to digital image data. The IOT 115 performs exposure and development by a laser beam and reproduces a color image. The components from the END conversion circuit 101 to the IOT interface 110 arranged between the IIT 100 and the IOT 115 constitute an edit processing system for the image data (IPS: image processing system). The image data of B, G and R are converted to the colors Y, M and C, and further to K, and a color signal corresponding to each developed color in each development cycle is outputted to the IOT 115.
In the IIT 100, one pixel is read at a density of 16 dots/mm for each color of B, G and R using a CCD sensor, and the data are outputted in 24 bits (3 colors × 8 bits; 256 gradations). The CCD sensor, provided with filters for B, G and R on its upper surface, has a length of 300 mm at a density of 16 dots/mm. Scanning of 16 lines/mm is performed at a process speed of 190.5 mm/sec, and the read data are outputted at a speed of 15M pixels per second for each color. In the IIT 100, the analog data of the B, G and R pixels are converted from reflectivity information to density information through log conversion and further to digital data.
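The read-out rate quoted above follows directly from the stated scan geometry; the following is a quick arithmetic check only, not part of the described apparatus:

```python
# Arithmetic check of the IIT figures above: a 300 mm sensor at
# 16 dots/mm, scanned at 190.5 mm/sec with 16 lines/mm.
pixels_per_line = 300 * 16     # 4800 sensor elements per line
lines_per_sec = 190.5 * 16     # 3048 scan lines per second
rate = pixels_per_line * lines_per_sec
print(round(rate / 1e6, 1))    # -> 14.6, i.e. roughly 15M pixels/sec per color
```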
In the IPS, color separation signals of B, G and R are inputted from the IIT 100. After various data processing operations to improve color reproducibility, gradation reproducibility, definition reproducibility, etc., a color signal of the color under development is outputted to the IOT through on-off conversion. The END (equivalent neutral density of a color) conversion module 101 adjusts (converts) the signal to a gray-balanced color signal, and a color masking module 102 converts that signal to a signal corresponding to the quantity of color material of Y, M and C through matrix computation on the B, G and R signals. A manuscript size detection module 103 detects the manuscript size during pre-scanning and erases the platen color (frame erasing) during the manuscript reading scan. A color conversion module 104 converts the color specified in a specific area according to an area signal inputted from the area image control module. A UCR (under color removal) & black generation module 105 generates an adequate quantity of K so as not to cause color turbidity and decreases the quantities of Y, M and C accordingly. At the same time, the K signal and the Y, M and C signals after under color removal are gated according to the respective signals in a mono-color mode and a 4-full-color mode. A space filter 106 is a non-linear digital filter provided with a function to restore blur and a function to remove moire. A TRC (tone reproduction control) module 107 performs density adjustment, contrast adjustment, negative/positive inversion, color balance adjustment, etc. to increase reproducibility. A scaling up/down module 108 scales the image up or down in the fast scanning direction; scaling up and down in the slow scanning direction is performed by adjusting the scanning speed of the manuscript. A screen generator 109 converts the color signal of the process color, expressed in multiple gradations, to binarized signals according to the gradations and outputs them.
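The order of the IPS stages described above can be sketched as a simple function pipeline. This is an illustration only; each function is a named stand-in for the corresponding hardware module, not its actual signal processing:

```python
# Illustration only: the IPS stage order, modeled as a function pipeline.
def end_conversion(d):       return d  # 101: gray-balanced color signal
def color_masking(d):        return d  # 102: B,G,R -> Y,M,C matrix computation
def size_detection(d):       return d  # 103: manuscript size / frame erasing
def color_conversion(d):     return d  # 104: color conversion in specified areas
def ucr_black_generation(d): return d  # 105: generate K, reduce Y,M,C
def space_filter(d):         return d  # 106: restore blur, remove moire
def trc(d):                  return d  # 107: density, contrast, inversion
def scaling(d):              return d  # 108: fast-scan scaling up/down
def screen_generator(d):     return d  # 109: multi-gradation -> binarized signal

def ips(data):
    # Each stage feeds the next, ending at the IOT interface.
    for stage in (end_conversion, color_masking, size_detection,
                  color_conversion, ucr_black_generation, space_filter,
                  trc, scaling, screen_generator):
        data = stage(data)
    return data
```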
The binarized color signals are outputted to the IOT 115 through an IOT interface module 110. An area image control module 111 comprises an area generation circuit and a switch matrix. An edit control module comprises an area command memory 112, a color palette video switch circuit 113, a font buffer 114, etc., and performs various edit controls.
In the area image control module 111, 7 rectangular areas and their priority can be set in the area generation circuit, and area control information is set on the switch matrix for each area. The control information includes information for color conversion, color mode (i.e. mono-color or full-color), modulation select information for photographs or characters, TRC select information, screen generator select information, etc. This information is used for the control of the color masking module 102, color conversion module 104, UCR module 105, space filter 106, and TRC module 107. The switch matrix can be set by software.
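As a hypothetical software model (all names and values below are invented for illustration, not taken from the described apparatus), the switch matrix can be thought of as a lookup from an area number to its control information:

```python
# Hypothetical model of the switch matrix: each set rectangular area
# selects control information for the downstream modules.
switch_matrix = {
    1: {"color_mode": "full-color", "modulation": "character",
        "trc_select": 0, "screen_select": 0},
    2: {"color_mode": "mono-color", "modulation": "photograph",
        "trc_select": 2, "screen_select": 1},
}
# Settings used outside any set area (assumed defaults for this sketch).
default = {"color_mode": "full-color", "modulation": "character",
           "trc_select": 0, "screen_select": 0}

def control_info(area_signal):
    # An area signal with no matrix entry falls back to the defaults.
    return switch_matrix.get(area_signal, default)

print(control_info(2)["modulation"])   # -> photograph
```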
The edit control module makes it possible to read a manuscript containing non-rectangular areas, such as a circular graph, and to perform painting processing to fill a specified area of any shape with a specified color. An area command of 4 bits is written on 4 plane memories, so that an edit command at each point of the manuscript can be set with the 4 bits of the 4 plane memories.
FIG. 2 shows an arrangement of the plane memories, which comprise two binary planes for work and 4 planes for picture depicting. The plane memories perform processing in a predetermined area, and there is no need for them to have as high a resolution as that of the inputted image; thus, the resolution is reduced to 4 bits/mm to decrease memory capacity. The plane memories comprise four planes in A3 size, 432 mm in the slow scanning direction and 300 mm in the fast scanning direction, and the color and pattern corresponding to the bit image written on the four planes are delivered. Therefore, 2^4, i.e. 16 types of processing, can be executed. The functions can be roughly divided into "coloring within a closed area" (painting), which paints a white portion in a closed area containing a specified point with any desired color and pattern, and "coloring in a rectangular area", which paints the space within a rectangular area specified by two points with any desired color and pattern. Such processings include: coloring within a frame by specifying a point within an area, color conversion to convert black on a black/white manuscript to a desired color by specifying an area with a marker, netting which leaves the original image of the manuscript, masking which paints the space within an area with white (making it transparent), trimming which paints the space outside an area with white, specified shifting similar to extraction, and painting which does not leave the original image of the manuscript.
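Assuming 1 bit per dot, so that 4 bits/mm along a scan line corresponds to 4 dots/mm, the memory saving from the reduced resolution can be checked with a small calculation (a sketch for illustration, not part of the described hardware):

```python
# Rough arithmetic: capacity of one A3 binary plane (432 mm x 300 mm),
# assuming 1 bit per dot, at full input resolution vs. plane resolution.
def plane_bits(slow_mm, fast_mm, dots_per_mm):
    return slow_mm * dots_per_mm * fast_mm * dots_per_mm

full = plane_bits(432, 300, 16)    # at the 16 dots/mm input resolution
reduced = plane_bits(432, 300, 4)  # at the reduced plane resolution
print(reduced)                     # -> 2073600 bits per plane
print(full // reduced)             # -> 16, a 16x reduction per plane
```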
FIG. 3 shows the relationship between the contents of the pictures on the planes and the area commands. A plane PW for work incorporates binarized data during painting scanning, for example; it can also be used for incorporating a marker area during marker scanning. A plane PM for work is used for depicting a picture in the painted area and for preparing an extraction area. The planes P3-P0 for storing commands use bit patterns as area commands. In this case, the relationship between the contents of the pictures on the planes P3-P0 and the area commands is as shown in FIG. 3. Specifically, with the 4 bits of P3, P2, P1 and P0, the area command for an area (1) in this figure is "0111"B (07H) because the plane P3 is "0" and each of the planes P2, P1 and P0 is "1". The area command for an area (2) is "1010"B (0AH) because each of the planes P3 and P1 is "1". The area command for an area (3) is "0000"B (0) because all planes are "0".
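The bit packing described above can be sketched as follows (the function name is invented for illustration; P3 is taken as the most significant bit, per the figure):

```python
# Sketch: form the 4-bit area command from the bits stored at one pixel
# position on the command planes P3..P0 (P3 = most significant bit).
def area_command(p3, p2, p1, p0):
    return (p3 << 3) | (p2 << 2) | (p1 << 1) | p0

# Area (1): P3 = 0, P2 = P1 = P0 = 1  ->  0111B
print(hex(area_command(0, 1, 1, 1)))   # -> 0x7
# Area (2): P3 = 1, P1 = 1, others 0  ->  1010B
print(hex(area_command(1, 0, 1, 0)))   # -> 0xa
# Area (3): all planes 0              ->  0000B
print(hex(area_command(0, 0, 0, 0)))   # -> 0x0
```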
FIGS. 4A-4C are drawings for explaining picture depicting with priority on later designation.
As shown in FIG. 4A, in the case where a user sets an area 1 and then an area 2, the area 1 is first written on the plane memory with an area command 1010B (10) as shown in FIG. 4B. Then, the area 2 is written over it with an area command 0011B (3). As a result, priority is given to the area 2 over the area 1.
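This later-designation priority can be sketched by collapsing the four 1-bit planes into one 4-bit value per pixel (a simplification for illustration; names and grid size are invented):

```python
# Sketch of priority on later designation: writing an area command sets
# all four plane bits at each pixel, so a later rectangle simply
# overwrites any earlier command in the overlap.
def write_rect(planes, x0, y0, x1, y1, command):
    for y in range(y0, y1):
        for x in range(x0, x1):
            planes[y][x] = command & 0xF  # all 4 bits replaced at once

planes = [[0] * 8 for _ in range(8)]      # tiny stand-in plane memory
write_rect(planes, 0, 0, 6, 6, 0b1010)    # area 1, command 10 (set first)
write_rect(planes, 3, 3, 8, 8, 0b0011)    # area 2, command 3 (set later)
print(planes[4][4])                       # overlap belongs to area 2 -> 3
```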
FIGS. 5A-5G show examples of a plane memory development processing system of conventional type.
In the conventional system, extraction areas "a" and "b" are depicted on the plane PM for work and the plane P0 for command setting, and a painting area "c" is developed on the planes P0-P3 for command setting as area command 2 (0010B) as shown in FIG. 5C. In this development processing, the area "c" is copied by OR logic processing on the plane P1 and is copied on the planes P0, P2 and P3 by inverted AND logic processing. For a painting area "d", similar processing is performed as area command 3 (0011B) as shown in FIG. 5D.
Then, as shown in FIG. 5E, the extraction areas "a" and "b" depicted on the plane PM for work are copied by AND logic processing onto the planes P0-P3 for storing commands, and a deletion area "e" is developed on the planes P0-P3 for storing commands as area command 4 (0100B) as shown in FIG. 5F. In this development processing, the area "e" is copied by OR logic processing on the plane P2 and is copied on the planes P0, P1 and P3 by inverted AND logic processing. Finally, as shown in FIG. 5G, the extraction areas "a" and "b" depicted on the plane PM for work are cleared.
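The per-plane development step above can be sketched as follows, representing each plane as an integer bitmap (a simplification for illustration; the function name is invented). For each command bit that is "1" the area mask is ORed onto that plane, and for each bit that is "0" the plane is ANDed with the inverted mask:

```python
# Sketch of conventional development: write one area command into the
# four command planes, one logic operation per plane.
def develop(planes, area_mask, command):
    """planes: list [P0, P1, P2, P3] of integer bitmaps; area_mask: bitmap."""
    for i in range(4):
        if (command >> i) & 1:
            planes[i] |= area_mask    # OR logic processing
        else:
            planes[i] &= ~area_mask   # inverted AND logic processing

planes = [0, 0, 0, 0]            # P0..P3, each an 8-pixel strip here
c = 0b00111100                   # painting area "c" as a 1-D bitmap
develop(planes, c, 0b0010)       # area command 2: only P1 receives "c"
print([bin(p) for p in planes])
```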
As described above, there are many development operations in the conventional system. For example, in the case of edit processing using a table, the same command is developed on the planes P0-P3 for storing commands separately for each area. For this reason, there arise problems of low processing efficiency and long processing time.