1. Field of the Invention
The present invention relates to a device and a method for interpolating image data composed of dot-matrix pixels, and to a medium on which a program for interpolating the image data is recorded.
2. Description of the Prior Art
An image is represented as dot-matrix pixels when handled by a computer or the like, and each pixel is represented by a gradation value. For example, a photograph or computer graphics is sometimes displayed on a screen of the computer as 640 pixels in the horizontal direction by 480 pixels in the vertical direction.
On the other hand, color printers have recently been improved rapidly in their performance and now achieve a high dot density of, for example, 720 dpi. When an original image composed of 640×480 dots is printed so that each printed dot corresponds to one original dot, the printed image becomes smaller than the original one. Moreover, images to be printed have various gradation values, and color printers have different resolutions. Accordingly, the original image data is required to be interpolated between dots before being converted to printing image data.
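As a rough sketch of the arithmetic involved (assuming, purely for illustration, a typical 72 dpi screen resolution for the original image):

```python
def interpolation_scale(src_pixels, src_dpi, dst_dpi):
    """Return the scale factor and the interpolated dot count needed so
    that a src_dpi image keeps its apparent size when printed at dst_dpi."""
    scale = dst_dpi / src_dpi
    return scale, round(src_pixels * scale)

# A 640-dot-wide screen image at an assumed 72 dpi must be enlarged
# tenfold, to 6400 dots, to keep its size on a 720 dpi printer.
```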
The prior art has provided, as techniques for interpolating the dots, a nearest neighbor interpolation method (hereinafter, "nearest method") and a cubic convolution interpolation method (hereinafter, "cubic method"). Further, Japanese patent publication No. 6-225140A discloses a technique for providing dot patterns so that an edge takes such an enlarged form as to be smoothed when edge smoothing is performed after dots have been interpolated.
The aforesaid interpolation techniques have the following problems. The nearest and cubic methods have their respective advantages and disadvantages. On the other hand, there have recently been many cases where a single document to be printed contains a plurality of types of objects to be processed. Accordingly, when a single interpolating process is carried out for all the objects, the quality of the result of interpolation is reduced for any object type to which that interpolating process is ill fitted.
Meanwhile, in the invention disclosed in Japanese patent publication No. 6-225140A, the number of patterns becomes enormously large when a color image is assumed, so that it is difficult to prepare the patterns in advance.
Further, as to metacommand pixels, noise pixels are sometimes produced by an error in the operation when the pixels are generated at a low resolution. Such noise pixels are also enlarged by the interpolating process.
The present invention has been made in view of the foregoing problems, and an object of the invention is to provide a device and a method for interpolating image data in which a satisfactory result can be achieved even when a plurality of types of objects to be processed is contained, and a medium on which a program for interpolating the image data is recorded.
To accomplish the object, the invention of claim 1 provides an image data interpolating apparatus which obtains image data containing attribute information capable of distinguishing a type of image for every pixel and enlarges the image data by an interpolating process, the device comprising a readout unit which reads out the image data, an interpolating unit which distinguishes a plurality of image types of the pixels based on the attribute information and applies one of a plurality of interpolating processes differing for every one of the image types to each one of the pixels, and a synthesizing unit which synthesizes the pixels interpolated by the different interpolating processes.
In the invention of claim 1 constructed as described above, the image data is obtained and enlarged by the interpolating process. The image data contains attribute information capable of distinguishing a type of image for every pixel. When the readout unit reads out the image data, the interpolating unit distinguishes image types of the pixels based on the attribute information and applies one of a plurality of interpolating processes differing for every one of the image types to each one of the pixels, and the synthesizing unit synthesizes the pixels interpolated by the different interpolating processes.
More specifically, the images include several types, and a most suitable pixel interpolating process differs according to the types. Accordingly, the image data containing the several types of images is recognized for every type of image and interpolated. The interpolating process and the synthesizing process need not be carried out separately but may be united together.
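The per-pixel dispatch and synthesis described above can be sketched minimally as follows, assuming a separate attribute plane holding a hypothetical type code for every pixel; the two interpolating processes here are simplified stand-ins, not the actual cubic convolution:

```python
import numpy as np

NATURAL, NONNATURAL = 0, 1  # hypothetical attribute codes

def nearest(img, scale):
    """Nearest method: repeat every pixel `scale` times in both directions."""
    return np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)

def smooth(img, scale):
    """Crude stand-in for the cubic method: nearest enlargement followed
    by a vertical neighbor average (illustrative only)."""
    e = nearest(img, scale).astype(float)
    e[1:-1, :] = (e[:-2, :] + e[1:-1, :] + e[2:, :]) / 3.0
    return e.astype(img.dtype)

def interpolate_by_attribute(img, attrs, scale):
    """Interpolate each pixel by the process chosen for its image type,
    then synthesize the results into one output region."""
    out = np.zeros((img.shape[0] * scale, img.shape[1] * scale), img.dtype)
    for attr, method in ((NATURAL, smooth), (NONNATURAL, nearest)):
        # Enlarge the attribute mask so it selects the interpolated pixels.
        mask = np.repeat(np.repeat(attrs == attr, scale, axis=0), scale, axis=1)
        out[mask] = method(img, scale)[mask]
    return out
```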
The image data is ordinary data representing a pattern constituted by dot-matrix pixels and should not be particularly limited to a picture such as a figure, a photograph, or characters. Further, the image data itself may be a set of dots but need not represent the respective dots directly. For example, the image data may be composed of drawing commands for drawing an image, or of fonts comprising vector information.
The image data contains several attributes differentiating properties of images and is held so as to be read out with the attributes being recognized. The image data may be prepared in advance or may newly be written onto a virtual region on the basis of input image data. As an example suitable for this case, the image data interpolating apparatus of claim 2 is constructed so that in the image data interpolating apparatus of claim 1, a virtual drawing unit is provided which inputs the plurality of types of image data having different image types, superposes the image data in a predetermined order, and renders the image types distinguishable, thereby performing a drawing operation in a virtual region, wherein the readout unit reads out the image data from the virtual region.
That is, the virtual drawing unit superposes the image data in the predetermined order, rendering the types in the image data distinguishable, to thereby perform the drawing operation.
The types in the image data are recognizable for every pixel. Various techniques to render the types recognizable can be employed. For example, an attribute area may separately be provided so that types of individual data are written as attributes onto the virtual region. Consequently, the type of each pixel can be found when the attribute area is referred to. In this case, writing may be performed by the virtual drawing unit.
The virtual region may be prepared for every type of image data and have a layer structure with, for example, a text screen and a natural-image screen. The enlarging process may be carried out while the image data is being input from the layer structure by an application. Furthermore, only a part of the image data whose attribute is recognizable may be read out for every type, and the remaining part of the image data may be left undistinguished.
The readout unit reads out the image data of every pixel for every type. For example, when the type can be determined from the attribute area, the readout unit selects the image data to be read out while referring to the attribute area.
Further, since a two-dimensional processing is performed in the interpolating process, the image data needs to be input accordingly. For this purpose, when the image data is read out from the virtual region, a plurality of lines of the image data may be input for the interpolating process. As a result, a two-dimensional interpolating process can be realized.
Various types of interpolating process may be employed. For example, the interpolating process by the cubic method is suitable for a natural image though unsuitable for a business graph. On the other hand, the nearest method is suitable for a non-natural image such as the business graph though unsuitable for the natural image. Whether the image data is a natural image or a non-natural image is a kind of characteristic of the image. The interpolating process is selected according to such a characteristic. Further, the interpolating process for the natural image can sometimes be changed depending upon an object. For example, the interpolating process may be changed between a daylight photograph and a night photograph. More specifically, it is sufficient that the characteristic of the image affects the result of interpolation when the interpolating process is changed according to that characteristic.
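The cubic method mentioned above conventionally weights the four samples surrounding an interpolation point with a piecewise cubic kernel; a one-dimensional sketch, assuming the common kernel parameter a = -0.5, is:

```python
def cubic_weight(x, a=-0.5):
    """Cubic convolution kernel weight for a sample at distance x."""
    x = abs(x)
    if x <= 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def cubic_interp_1d(samples, t):
    """Interpolate between samples[1] and samples[2] at fraction t (0..1)
    from the four surrounding sample values."""
    return sum(samples[i] * cubic_weight(t - (i - 1)) for i in range(4))
```

In two dimensions the same kernel is applied along rows and columns, so each interpolated pixel is a weighted sum of a 4×4 neighborhood.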
As another example, the interpolating unit may be provided with pattern data corresponding to presence or absence of pixel information in a predetermined area and interpolation pixel information with a predetermined interpolating scale factor corresponding to each pattern data. The interpolating unit may input a pixel of the corresponding area from the virtual region and compare the pixel with the pattern data, so that the interpolating process is performed on the basis of the interpolation pixel information corresponding to the matched pattern data.
In the case of the above-described construction, the interpolating process is carried out by pattern matching with respect to a predetermined small area. More specifically, the pattern data corresponding to presence or absence of the pixel information in the small area is prepared, and a pixel in the corresponding area is read out from the virtual region and compared with the pattern data. The interpolating process is performed on the basis of the interpolation pixel information of a predetermined interpolating scale factor corresponding to the matched pattern data. Accordingly, an expected noise pixel can be prepared as pattern data, together with interpolation pixel information in which the noise pixel is deleted.
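A minimal sketch of such table-driven interpolation, with one hypothetical 2×2 presence/absence pattern and the 4×4 block of interpolation pixel information prepared for it (scale factor 2):

```python
# One hypothetical entry: a diagonal-edge pattern and the smoothed
# 4x4 block of interpolation pixel information prepared for it.
PATTERNS = {
    ((1, 0),
     (0, 1)): ((1, 1, 0, 0),
               (1, 1, 1, 0),
               (0, 1, 1, 1),
               (0, 0, 1, 1)),
}

def interpolate_by_pattern(area):
    """Compare a 2x2 area with the pattern data; on a match, return the
    prepared block, otherwise fall back to plain pixel repetition."""
    key = tuple(tuple(row) for row in area)
    if key in PATTERNS:
        return PATTERNS[key]
    return tuple(
        tuple(v for v in row for _ in range(2))
        for row in area for _ in range(2)
    )
```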
In the aforesaid pattern matching, renewing all the comparison data is troublesome when an object area is moved so that a new pattern matching is performed. In this case, the pattern data may be a rectangular area containing pixels whose number corresponds to a concurrently processible data width, and a new sequence of pixels is incorporated into the comparison data in a moving direction of the rectangular area by a first-in first-out process so that the matching with the pattern data is continued.
In the above-described arrangement, the pattern matching can be carried out by one operational processing per area in the case of a rectangular area containing pixels whose number corresponds to a concurrently processible data width. Further, when the object area is moved so that a new pattern matching is carried out, not all the comparison data need be renewed; a new sequence of pixels is incorporated into the comparison data in the moving direction by the first-in first-out process. More specifically, in the pattern matching of 4×4 pixels, the comparison with pattern data of 16 pixels is performed. When the square area is moved by one pixel, information about three rows of pixels does not substantially change. Information about presence or absence of the one row of four pixels at the forward side relative to the moving direction is incorporated into the comparison data, and information about presence or absence of the one row of four pixels at the backward side is excluded from the comparison. Accordingly, the first-in first-out is carried out with respect to those four pixels, so that not all the comparison data need be renewed. Consequently, the pattern matching can be carried out easily and efficiently.
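The first-in first-out advance of the comparison data can be sketched as follows, holding the 4×4 window as four columns so that only the entering column is new work:

```python
from collections import deque

class MatchWindow:
    """A 4x4 comparison window advanced one column of four pixels at a
    time; the oldest column drops out first-in first-out."""
    def __init__(self, first_cols):
        self.cols = deque(first_cols, maxlen=4)
    def advance(self, new_col):
        # Incorporate the entering column; the three remaining columns
        # are reused unchanged, so no full renewal is needed.
        self.cols.append(new_col)
    def as_bits(self):
        return tuple(tuple(c) for c in self.cols)
```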
The determination cannot be made on the basis of only the presence or absence of pixels when the pattern matching is applied to a color image. Accordingly, pattern data needs to be prepared for every color, but this is unrealistic. On the other hand, the interpolated image data corresponding to the pattern data may be arranged to include color arrangement information of every color in the comparison data.
In the above-described arrangement, the pixels are matched with the comparison data representative of the presence or absence of pixels. Since the interpolated image information referred to when the pixels have been matched with the comparison data includes the color arrangement information, the interpolation of the color image by the pattern matching is substantially accomplished by the color arrangement. Consequently, the color image can also be interpolated by the pattern matching.
The synthesizing unit synthesizes image data for which the pixels thereof have been interpolated. In this case, when the interpolating unit is arranged to temporarily hold the results of the interpolating process for every image data in another region, the image data held in the region is superposed in a predetermined sequence. Alternatively, the results of interpolation may be written onto a predetermined output region with the interpolating process being carried out in the predetermined sequence.
Although the synthesizing unit thus synthesizes the image data in the predetermined sequence, the superposition can be performed more satisfactorily according to the character of the interpolating process. As an example, the invention claimed in claim 3 is constructed so that in the image interpolating apparatus of claim 1 or 2, the synthesizing unit includes a margin processing unit which adjusts superposition of margins of the image data after interpolation of the pixels.
In the invention of claim 3 thus constructed, the margin processing unit of the synthesizing unit adjusts the superposition of margins of the image data after the image interpolation.
Since the interpolating process generates new pixels and there are different techniques in the interpolating process, a configuration of the margin varies when different interpolating processes are applied. For example, when there are an interpolating process resulting in a large variation in the marginal configuration and another interpolating process maintaining an original marginal configuration, the superposition is preferably carried out on the basis of the marginal configuration in the latter. In this sense, the margin processing unit adjusts the superposition.
The adjustment by the margin processing unit may be changed according to the interpolating process. As an example, the margin processing unit may superpose a plurality of types of image data for which the pixels thereof have been interpolated, in a sequence predetermined according to the interpolating process.
In the above-described arrangement, the margin is adjusted by superposing the image data in the sequence predetermined according to the interpolating process. In the aforesaid example, there are an interpolating process resulting in a large variation in the marginal configuration and another interpolating process maintaining an original marginal configuration. In this case, the former is first written onto a predetermined output region and thereafter, the latter is overwritten such that the marginal configuration of the latter is used.
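A sketch of this write order, assuming a hypothetical background sentinel marks positions a layer does not cover:

```python
import numpy as np

BACKGROUND = -1  # hypothetical sentinel: "no pixel written here"

def synthesize(shape, layers):
    """Write interpolated layers onto one output region in the given
    order, so a later layer's marginal configuration overwrites an
    earlier layer's (e.g. margin-varying result first, the result
    maintaining the original marginal configuration last)."""
    out = np.full(shape, BACKGROUND)
    for layer in layers:
        mask = layer != BACKGROUND
        out[mask] = layer[mask]
    return out
```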
As another example, the margin processing unit may first write onto the output region the image data corresponding to an interpolating process in which the margin is expanded.
In the above-described arrangement, when there is image data in which the margin is expanded as the result of the interpolating process, the margin processing unit first writes that image data to be interpolated by the interpolating process onto the output region. The margin is narrowed or expanded depending upon the interpolating process. The margin intrudes into an adjacent region when expanded. Accordingly, the image data is first written onto the output region and the marginal portion is overwritten so that a percentage of the portion intruding into the adjacent region is substantially reduced, whereupon the marginal configuration is maintained.
Furthermore, the margin processing unit may write the image data for which an interpolating process in which the marginal configuration is smoothed is carried out, later than the image data for which another interpolating process is carried out.
When there are the interpolating process in which the marginal configuration is smoothed and another interpolating process, the interpolating process in which the marginal configuration is smoothed can maintain the marginal configuration. Accordingly, when the image data corresponding to the interpolating process in which the marginal configuration is smoothed is first written onto the output region, the marginal configuration which should not be maintained is maintained, which is inconvenient. Accordingly, the image data corresponding to the interpolating process in which the marginal configuration is smoothed is later written onto the output region.
Whether the marginal configuration should be maintained depends upon a purpose. For example, the margin is easily smoothed in the pattern matching. There is a case where the marginal configuration is maintained when such smoothing is carried out. On the other hand, although jaggedness becomes more conspicuous as the interpolating scale factor is increased, as in the nearest method, the marginal configuration can be maintained.
Furthermore, the invention claimed in claim 4 is constructed so that in the image data interpolating apparatus of claim 3, when the readout unit reads out the image data, the margin processing unit causes the readout unit to read out the image data with a margin enlarged, thereby superposing the image data on the image data interpolated by the interpolating unit on the basis of the image data with the enlarged margin.
In the invention of claim 4 thus constructed, when the readout unit reads out the image data, the margin processing unit causes the readout unit to read out the image data with a margin enlarged. The image data is then superposed on the image data interpolated by the interpolating unit on the basis of the image data with the enlarged margin.
The resultant interpolation image data is expanded when the original image data is expanded in advance. Then, even when the margin of the adjacent interpolation image data does not match that of the expanded image data, that margin is reserved as a foundation and dropout of pixels is prevented.
Further, the invention claimed in claim 5 is constructed so that in the image data interpolating apparatus of claim 4, the margin processing unit enlarges the margin of the image data with respect to an interpolating process in which information outside the margin is drawn in.
In the invention of claim 5 thus constructed, when there is an interpolating process in which information outside the margin is drawn in, the margin is enlarged as described above and the interpolating process is then performed. The post-interpolation image data is written onto the output region etc. When the interpolating process draws in information outside the margin, information is diluted since information of a part containing no pixels is drawn in, which influences the margin. On the other hand, when the margin is previously enlarged, the influenced margin is concealed under the margin of the adjacent image data, whereupon the influence is eliminated.
There are various types of image data, which can mainly be classified into metacommand image data and non-metacommand image data. As an example suitable for such a case, the invention claimed in claim 6 is constructed so that in the image data interpolating apparatus of any one of claims 1 to 5, said plurality of types of image data having different image types include image data corresponding to a metacommand and other image data, which further comprises a non-metacommand pixel interpolating unit which enlarges a marginal region when reading out the pixel corresponding to the image data other than the metacommand and performs an interpolating process so that a predetermined interpolation scale factor is obtained, and a metacommand pixel interpolating unit which generates an interpolated pixel so that the interpolated pixel corresponds to the original metacommand when reading out the pixel corresponding to the metacommand and performing an interpolating process so that the interpolating scale factor is obtained, wherein the synthesizing unit synthesizes a result of interpolation by the non-metacommand pixel interpolating unit and a result of interpolation by the metacommand pixel interpolating unit, the synthesizing unit preferring the result of interpolation by the metacommand pixel interpolating unit with respect to the superposed portion.
In the invention of claim 6 thus constructed, the non-metacommand pixel interpolating unit reads out from the virtual region etc. the pixels corresponding to the image data other than the metacommand and performs an interpolating process so that a predetermined interpolation scale factor is obtained. In this case, the interpolating process is performed with the marginal region being enlarged. Accordingly, an interpolated image enlarged relative to the original region is obtained. On the other hand, the metacommand pixel interpolating unit also reads out from the virtual region etc. the pixels corresponding to the metacommand and performs an interpolating process so that the predetermined interpolation scale factor is obtained. In the interpolating process, the interpolated pixels are generated so as to correspond to the original metacommand. The synthesizing unit synthesizes a result of interpolation by the non-metacommand pixel interpolating unit and a result of interpolation by the metacommand pixel interpolating unit. The synthesizing unit prefers the result of interpolation by the metacommand pixel interpolating unit with respect to the superposed portion.
More specifically, the metacommand image is a mass of dot-matrix pixels in the virtual region etc. The metacommand image has a smoothed contour corresponding to the original metacommand in the interpolating process, but the contour is necessarily changed from that before the interpolating process. When the contour is thus changed, a part of the metacommand image may be superposed on the adjacent image other than the metacommand or a gap may be formed. On the other hand, the image other than the metacommand is generated so as to be larger than the original and accordingly, there is less possibility of occurrence of the gap and the image of the metacommand is preferred in the superposed portion. Consequently, a smoothed contour remains.
The metacommand used here means a vectorial representation of shape. Accordingly, a drawing application translates the command to draw a graphic, and the image quality is not deteriorated even if enlargement or reduction is repeated. On the other hand, information about each pixel is given when an image other than the metacommand is drawn. The information is lost when the image is reduced and cannot be recovered even when the image is enlarged again. In this sense, the metacommand is used for characters as well as for images.
The characteristic of the metacommand having such a property cannot always be maintained in the processing by a computer. Accordingly, when the metacommand is represented as a mass of pixels at one time, it needs to be subjected to the same processing as applied to the other image thereafter. However, depending upon whether or not an image has been drawn by the metacommand, the interpolating technique needs to be changed in the interpolating process in which the image is enlarged. For example, it is considered that an image by the metacommand should generally be interpolated with a margin being smoothed. On the other hand, whether the margin should be smoothed cannot be determined unconditionally regarding the other image. Accordingly, when the metacommand and non-metacommand images are interpolated in manners different from each other, marginal configurations may differ after the interpolating processes. This is the background of the present invention.
The virtual drawing unit carries out drawing on the virtual region on the basis of the image data corresponding to the metacommand and the other image data. The virtual drawing unit is capable of recognizing the image data corresponding to the metacommand and the other image data from each other. Various techniques rendering the recognition possible may be employed. For example, an attribute area may separately be provided so that types of the individual data in the virtual region are written thereon, or the individual data may be provided with respective attributes. Further, when the number of colors can be reduced, a certain bit can be applied to the attribute.
The non-metacommand pixel interpolating unit reads out from the virtual region the pixels corresponding to the image data other than the metacommand, carrying out the interpolating process. For example, when the type of the image data can be determined from the attribute area, the non-metacommand pixel interpolating unit reads out the pixels corresponding to the image data other than the metacommand while making reference to the attribute area to select the image data. The non-metacommand pixel interpolating unit further interpolates the pixels by the corresponding interpolating process.
Various interpolating manners can be employed. For example, the interpolating process by the cubic method is suitable for the natural image, whereas the nearest method is suitable for the non-natural image such as computer graphics.
The non-metacommand pixel interpolating unit enlarges the peripheral edge region and then carries out the interpolating process. Various processing manners for enlarging the peripheral edge region are employed. As an example, the invention claimed in claim 7 is constructed so that in the image data interpolating apparatus of claim 6, the non-metacommand pixel interpolating unit uses information about the pixel in the marginal region as information about a pixel outside the marginal region.
In the invention of claim 7 thus constructed, since the peripheral edge region to be enlarged contains no information about pixels, information about pixels in the peripheral region is used as information about pixels in the region to be enlarged. Accordingly, the information may be copied without change or with stepwise changes. Further, copying may be carried out using a working region. Thus, an actual copying operation need not be carried out; it is sufficient that the information is substantially usable. The size of the region to be enlarged is not particularly limited. The size of the region depends upon irregularity of the margin resulting from generation of pixels by the metacommand pixel interpolating unit. Even when the metacommand pixel has a concavity, the concavity is allowed to such an extent that it does not result in a gap in the superposition of the image data. For example, when pixel information is read out for every line from the virtual region, a processing for enlarging both ends of the metacommand pixel by one pixel is sufficient.
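Such enlargement by reusing the marginal pixel information, i.e. copying the outermost pixels outward by one pixel on every side, can be sketched as:

```python
def enlarge_margin(rows):
    """Enlarge a pixel block by one pixel on every side, reusing the
    information of the outermost pixels for the new outside pixels
    (edge replication)."""
    # Replicate the first and last pixel of every row horizontally...
    padded = [[row[0]] + list(row) + [row[-1]] for row in rows]
    # ...then replicate the first and last row vertically.
    return [list(padded[0])] + padded + [list(padded[-1])]
```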
The metacommand pixel interpolating unit selectively reads out the pixel corresponding to the metacommand from the virtual region. The metacommand pixel interpolating unit generates an interpolation pixel corresponding to the original metacommand. For example, when the pixel is taken as an image, smoothing a margin or rendering a corner acute corresponds to this processing.
On the other hand, a metacommand representative of a character contains information about a figure bending in a complicated manner in a small region. Accordingly, an extra dot tends to be generated depending upon an operational accuracy in generation of pixels. As an example suitable for this characteristic, in the interpolating process for the metacommand representative of the character, a noise pixel in the generation of pixels on the basis of the metacommand may be deleted and the pixels may then be interpolated.
In the above-described construction, determination is made as to whether a pixel is a noise pixel when the pixels are generated on the basis of the metacommand representative of the character. When the pixel is a noise pixel, it is deleted and the pixels are then interpolated. Whether a pixel is a noise pixel can generally be determined from a property of the character: for example, a single pixel projects from an otherwise straight portion, an unnaturally projecting pixel appears in a portion where two sides intersect, or an unnaturally projecting pixel appears at an end of a curve. That is, the noise pixels mean those which can be produced at a connection in the case where a character is represented by a plurality of vector data.
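One such determination, for the single-pixel-projecting-from-a-straight-portion case, can be sketched as a heuristic (the shapes at intersections and curve ends would need further rules):

```python
def is_projection(bitmap, r, c):
    """True when (r, c) is a lone set pixel sticking up from the run
    directly below it, with no set pixel to its left, right, or above
    (a heuristic for one noise shape; not a complete classification)."""
    h, w = len(bitmap), len(bitmap[0])
    if not bitmap[r][c] or r + 1 >= h:
        return False
    left = c > 0 and bitmap[r][c - 1]
    right = c < w - 1 and bitmap[r][c + 1]
    above = r > 0 and bitmap[r - 1][c]
    return bool(bitmap[r + 1][c]) and not left and not right and not above
```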
On the other hand, a superposing unit synthesizes the result of interpolation by the non-metacommand pixel interpolating unit and the result of interpolation by the metacommand pixel interpolating unit and causes the result of interpolation by the metacommand pixel interpolating unit to take preference over the result of interpolation by the non-metacommand pixel interpolating unit concerning a superposed portion.
As an example of the processing in which one takes preference over the other, the invention claimed in claim 8 is constructed so that in the image data interpolating apparatus of any one of claims 1 to 7, the synthesizing unit synthesizes the pixels, superposing pixels in the result of interpolation by the metacommand pixel interpolating unit other than background pixels on the result of interpolation by the non-metacommand pixel interpolating unit.
In the invention of claim 8 thus constructed, the results of interpolating processes held in another area may be superposed in a predetermined sequence if the result of interpolation by the non-metacommand pixel interpolating unit and the result of interpolation by the metacommand pixel interpolating unit are preliminarily held in the another area. Alternatively, the results of interpolation may be superposed while the interpolating processes are carried out in a predetermined sequence.
The aforesaid image data interpolating technique should not be limited to a substantial apparatus. It can easily be understood that the technique also functions as a method.
The aforesaid image data interpolating apparatus may exist independently or may be incorporated in equipment. Thus, the scope of the invention covers various forms of implementation. Accordingly, the invention may be implemented as software or hardware.
When the invention is implemented as software for an image data interpolating apparatus, the invention applies equally to a medium on which the software is recorded.
When the invention is implemented as software, it may employ hardware and an operating system, or may be implemented independently of them. For example, a process for inputting image data for the interpolation can be accomplished by calling a predetermined function of an operating system or by inputting from hardware without calling the function. Even when the invention is actually implemented with the interposition of hardware, it can be understood that the invention can be implemented by a program alone at the stage of recording the program on a medium and circulating the medium.
The recording medium may be a magnetic recording medium or a magneto-optical recording medium, or any recording medium that will be developed in the future. Further, the invention may take such a replicated form as a primary replicated product, a secondary replicated product, etc. In addition, the invention may be supplied through use of a communication line.
Still further, there may be provided such an arrangement that some parts of the present invention are embodied in software while the other parts thereof are embodied in hardware. In a modified embodiment of the invention, some parts thereof may be formed as software recorded on a storage medium to be read into hardware as required.