1. Field of the Invention
The present invention relates to an image enlargement method, image enlargement apparatus, and image forming apparatus for performing enlargement processing by resolution conversion at an arbitrary magnification for digital image data input by a scanner device or image input device.
2. Description of the Related Art
Opportunities to output images are increasing along with the prevalence of image input devices for reading image data, such as scanner devices, and image sensing devices such as digital cameras. For example, it is becoming popular to output images on multifunction peripherals (to be referred to as MFPs) in the office and home. In particular, devices with an image sensing function for personal use, typified by digital cameras and cell phones, are widely used to edit and output image data via storage media such as SD cards.
The number of image sensing pixels of digital image data in these image sensing devices is increasing year by year. However, the total pixel count is still insufficient compared to the print resolution of an image output device or the size of the print medium, and sensed image data is sometimes output in a large size. A demand has therefore arisen for an enlargement processing function that outputs input image data at higher quality.
Image data enlargement methods include the SPC method, the linear interpolation method (bi-linear method), and the bi-cubic method. The SPC and linear interpolation methods implement an enlargement processing function with relatively simple configuration requirements. The bi-cubic method provides high-quality enlargement, though its configuration requirements are complex.
In enlargement processing by an image output device such as an MFP, the bi-cubic method enables enlargement at high image quality. However, owing to its high control complexity and large circuit scale, the bi-cubic method is implemented mainly in high-end products. Output from a digital camera and the like uses not only high-end MFPs but also home MFPs, and because of the configuration requirements, most enlargement processing methods installed in such low-end MFPs are not the bi-cubic method.
The SPC method and linear interpolation method will be explained with reference to the accompanying drawings.
(Concept of SPC Method)
The SPC method implements the enlargement function most easily. FIG. 18 shows the concept of one-dimensional enlargement processing by the SPC method.
In FIG. 18, pixels S0, S1, S2, and S3 represented by quadrangles form an input image. The density of the quadrangle indicates a pixel value. The black pixels S0 and S3 mean that these pixels are black (high density). The white pixels S1 and S2 mean that these pixels are white (low density). In the example of FIG. 18, the input image is made up of four pixels, and pixels at the two ends are black. SL represents the distance between constituent pixels of the input image.
When the input image data of four pixels is enlarged one-dimensionally at 2.5-magnification, the output pixel count is 10 (=4×2.5). Since output image data is formed by referring to input image data, a distance DL between constituent pixels of the output image is smaller than the distance SL between constituent pixels of the input image (DL<SL/2 because of 2.5-magnification enlargement). According to the SPC method, an input pixel value closest to an output pixel position is output as an output pixel. Hence, the first pixel S0 (black pixel) of the input image is output as the first pixel D0 and second pixel D1 of the output image. The second pixel S1 (white pixel) of the input image is output as the third pixel D2 and fourth pixel D3 of the output image. Similarly, the third pixel S2 of the input image is output as the fifth pixel D4, sixth pixel D5, and seventh pixel D6 of the output image. The fourth pixel S3 of the input image is output as the eighth pixel D7, ninth pixel D8, and 10th pixel D9.
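The mapping described above can be sketched in a few lines (a minimal illustration of the SPC method as described; the function and variable names are illustrative, not taken from any cited reference):

```python
def spc_enlarge(src, magnification):
    """One-dimensional SPC enlargement: copy the input pixel value
    closest to each output pixel position."""
    out_len = int(len(src) * magnification)
    out = []
    for i in range(out_len):
        # Position of output pixel i measured in input-pixel units.
        src_pos = i / magnification
        # Nearest input pixel, clamped to the valid index range.
        j = min(int(round(src_pos)), len(src) - 1)
        out.append(src[j])
    return out

# Four input pixels as in FIG. 18: black (0) at both ends, white (255) inside.
print(spc_enlarge([0, 255, 255, 0], 2.5))
# -> [0, 0, 255, 255, 255, 255, 255, 0, 0, 0]
```

As in the figure, S0 is duplicated into D0 and D1, S1 into D2 and D3, S2 into D4 through D6, and S3 into D7 through D9.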
The arrangement for enlargement processing by the SPC method is simple, but the image quality after enlargement is correspondingly low. More specifically, the simple duplication of pixels generates jagged steps (staircasing) in the output image data, failing to implement smooth enlargement processing.
(Concept of Linear Interpolation Method)
The linear interpolation method is a method for solving the problems of the SPC method. In enlargement processing, two input pixel values closest to an output pixel position are referred to. The distance between a reference pixel and an output pixel position is calculated as a weighting coefficient.
FIG. 19 shows the concept of one-dimensional enlargement processing by the linear interpolation method. In FIG. 19 as well as FIG. 18, pixels S0, S1, S2, and S3 represented by quadrangles form an input image. As for a value (density) indicated by each input pixel, the black pixels S0 and S3 mean that these pixels are black (high density), and the white pixels S1 and S2 mean that these pixels are white (low density), similar to FIG. 18.
Also in FIG. 19, the input image is made up of four pixels, and pixels at the two ends are black. The distance between constituent pixels of the input image is SL.
When the input image data of four pixels is enlarged one-dimensionally at 2.5-magnification, the output pixel count is 10. Also in this case, output image data is formed by referring to input image data. Similar to FIG. 18, the distance DL between constituent pixels of the output image is smaller than the distance SL between constituent pixels of the input image (DL<SL/2 because of 2.5-magnification enlargement).
As the first pixel D0 of the output image, the first pixel S0 of the input image is output directly. The next pixel, spaced apart by the distance DL between constituent pixels of the output image, that is, the second pixel D1, is obtained by calculation referring to the two input pixels S0 and S1 closest to the second pixel position.
The position of the second pixel D1 of the output image is spaced apart from the first pixel S0 of the input image by the distance DL. The position of the second pixel D1 of the output image is spaced apart from the second pixel S1 of the input image by a distance obtained by subtracting the distance DL between constituent pixels of the output image from the distance SL between constituent pixels of the input image. From this, the value of the second pixel D1 of the output image is given by equation (1) using the two input pixel values S0 and S1 close to the output pixel position, the distance DL to the second pixel position of the output image, and (SL−DL):

D1={S0×(SL−DL)+S1×DL}/SL  (1)
In equation (1), the reason for division by the distance SL between constituent pixels of the input image is that the distance to the output pixel position is calculated as a weight when calculating the D1 value. In many cases, the SL value is a power of 2 to simplify the division circuit.
Subsequently, the value of the third pixel D2 of the output image is calculated. The third pixel position of the output image comes between the first pixel S0 and second pixel S1 of the input image, similar to the second pixel position of the output image. This is because the magnification of enlargement processing is 2.5 and the distance DL between constituent pixels of the output image is smaller than SL/2.
The position of the third pixel value D2 of the output image is spaced apart from the first pixel S0 of the input image by the distance DL further from the position of the second pixel value D1 of the output image, that is, by (DL×2). The position of the third pixel value D2 of the output image is spaced apart from the second pixel S1 of the input image by a distance obtained by subtracting (DL×2) from the distance SL between constituent pixels of the input image. That is, the value of the third pixel D2 of the output image is given using the two input pixel values S0 and S1 close to the output pixel position, the distance (DL×2) to the third pixel position, and (SL−DL×2):

D2={S0×(SL−DL×2)+S1×DL×2}/SL  (2)
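Equations (1) and (2) can be illustrated with concrete values (a sketch assuming SL = 256, a magnification of 2.5, and truncating integer division; the helper name is illustrative):

```python
SL = 256           # assumed distance between input pixels (a power of 2)
S0, S1 = 0, 255    # black and white reference pixels

def interpolate(left, right, dist):
    """Weight the two reference pixels by their distance to the output
    pixel position, then divide by SL, as in equation (1)."""
    return (left * (SL - dist) + right * dist) // SL

DL = int(SL / 2.5)                   # 102, fractional part rounded down
D1 = interpolate(S0, S1, DL)         # second output pixel, equation (1)
D2 = interpolate(S0, S1, DL * 2)     # third output pixel, equation (2)
print(D1, D2)
# -> 101 203
```

The output values climb gradually from black toward white rather than jumping, which is the smoothing effect linear interpolation provides over the SPC method.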
Next, the value of the fourth pixel D3 of the output image is calculated. Unlike the preceding output pixel positions, the fourth pixel position of the output image comes between the second pixel S1 and third pixel S2 of the input image.
In the linear interpolation method, the shift of the reference pixel is often determined by accumulating the distance DL between constituent pixels of the output image. In the example of FIG. 19, the distances DL are accumulated, and the moment the sum exceeds SL is used as the timing to update the reference pixel position of the input image.
This will be explained using concrete numerical values. The distance SL between constituent pixels of an input image is assumed to be "256" (2^8). In this case, the distance DL between constituent pixels of an output image in 2.5-magnification enlargement is attained by dividing SL by the magnification:

DL=SL/2.5  (3)
The fractional value of the calculation result in equation (3) is rounded down, obtaining a DL value “102”. A target pixel position in the output image is three pixels after the first pixel output and is indicated by “306” as a result of accumulating DL three times. The accumulated DL exceeds SL (“256”), and the reference pixel position of the input image is updated to calculate an output image value.
When the distance SL between constituent pixels of an input image is a power of 2, for example, the carry of an accumulator triggers update of the reference pixel position of the input image in output pixel calculation. The output value of the accumulator after carry output can be used to calculate the distance to the output pixel position upon updating the reference pixel position of the input image. In the foregoing example, the distance from the reference pixel S1 of the input image to the fourth pixel position D3 of the output image is obtained by subtracting the distance SL ("256") between constituent pixels of the input image from the DL accumulated value "306". When the accumulator is configured by 8 bits defined as the distance SL, the output value of the accumulator after carry output is "50". Therefore, the distance from the reference pixel S2 of the input image to the fourth pixel position D3 of the output image is "206", obtained by subtracting "50" from the distance SL ("256") between constituent pixels of the input image.
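The accumulator behavior described above can be modeled roughly as follows (ordinary integers stand in for the 8-bit accumulator; the names are illustrative):

```python
SL = 256     # distance between input pixels (8-bit accumulator range)
DL = 102     # SL / 2.5, fractional part rounded down

phase = 0    # accumulated distance from the current reference pixel
ref = 0      # index of the left reference pixel in the input image

for n in range(1, 4):        # advance through output pixels D1..D3
    phase += DL
    if phase >= SL:          # carry: update the reference pixel position
        phase -= SL          # low 8 bits of the accumulator remain
        ref += 1

# After three accumulations (306), a carry has occurred: the reference
# pixel is S1, D3 is 50 from S1 and 206 from S2.
print(ref, phase, SL - phase)
# -> 1 50 206
```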
In the same way, the reference pixel position of the input image is updated in accordance with the pixel position of the output image. Output image data is calculated using, as a weighting coefficient, the distance from the reference pixel position to the pixel position of the output image.
For example, Japanese Patent Laid-Open No. 9-326958 discloses control of the pixel position of an output image and the reference pixel position of an input image in the linear interpolation method. This Japanese Patent Laid-Open No. 9-326958 describes calculation of the distance (weight) of the output pixel position and memory address control for easily reading input reference pixel data used in calculation.
Several methods have been proposed for handling the trailing pixel in the linear interpolation method. In the example shown in FIG. 19, a virtual pixel DS0 is arranged next to the fourth pixel S3 which forms the input image. In this case, the values of the pixels D8 and D9 of the output image that use the virtual pixel DS0 as a reference pixel change depending on the virtual pixel value. More specifically, the virtual pixel DS0 is set to have the same value as the immediately preceding pixel value S3 or a fixed value (white/black), and the values of the pixels D8 and D9 are attained by interpolation calculation referring to the virtual pixel value and the last pixel S3 of the input image. As another method, the last pixel S3 which forms the input image is output directly, without calculating the trailing edge of the output image, that is, D8 and D9.
The linear interpolation method described with reference to FIG. 19 suffers the same problems as those of the SPC method when the enlargement magnification is 2 or 4. This arises from direct output of the first pixel S0 of the input image by using the first pixel D0 after enlargement processing as the reference point of the output image.
For example, when the enlargement magnification is 2, the distance SL between constituent pixels of an input image and the distance DL between constituent pixels of an output image satisfy the relation of DL=SL/2, as is apparent from the calculation result of equation (3). This means that the output image is in phase (distance is 0) with pixel data which forms the input image for every two pixel outputs of the output image. In other words, even if interpolation calculation is done using two input pixels close to an output pixel position as reference pixels in calculation of an output pixel value, an input pixel is directly generated as an output pixel for every two pixels.
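This degeneration at a magnification of 2 can be illustrated with the accumulated phase (an assumed model using SL = 256):

```python
SL, DL = 256, 128    # magnification 2, so DL = SL / 2

phase, phases = 0, []
for n in range(8):           # phase of the first eight output pixels
    phases.append(phase)
    phase = (phase + DL) % SL

print(phases)
# -> [0, 128, 0, 128, 0, 128, 0, 128]
```

Every second output pixel has phase 0, i.e. it sits exactly on an input pixel position, so the weighting of equation (1) reduces to outputting the input pixel value unchanged.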
As a method for avoiding this problem, an initial phase is set in the linear interpolation method. FIG. 20 is a conceptual view of a linear interpolation method which sets an initial phase. In FIG. 20, the same reference numerals as those in FIG. 18 or 19 have the same meanings.
A feature of linear interpolation shown in FIG. 20 is that an initial phase is set for the output position of the first pixel D0 of an output image in a direction (rightward in FIG. 20) in which an image is formed. The initial phase is “INIT” in FIG. 20.
As a feature of the linear interpolation method shown in FIG. 20, the first pixel position of the output image is out of phase with that of the input image. Since the initial phase value is set for the first pixel D0 of the output image, the first pixel S0 of the input image is not directly output.
When the initial phase "INIT" is set to "30", the first pixel value of the output image is calculated based on the set initial phase value ("30") and a value ("226") obtained by subtracting the initial phase value ("30") from the distance ("256") between constituent pixels of the input image. Since the first pixel value of the input image represents black ("0") and the second pixel value represents white ("255"), the first pixel D0 of the output image at the initial phase value is

D0={0×(256−30)+255×30}/256≈29
The second pixel D1 of the output image is a pixel position obtained by adding the distance DL ("128") between constituent pixels of the output image to the first pixel D0:

D1={0×(256−158)+255×158}/256≈157
The third pixel D2 of the output image is a pixel position obtained by further adding the distance DL (“128”) between constituent pixels of the output image to the second pixel D1. The third pixel D2 is a pixel position (“286”) obtained by adding DL×2 (=“256”) to the initial phase (“30”). This pixel position based on accumulation exceeds the distance SL (“256”) between constituent pixels of the input image. Thus, a carry occurs in the accumulator, and a value (“30”) obtained by subtracting “256” is output as an accumulation result.
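The worked values above (D0 ≈ 29, D1 ≈ 157, and the carry before D2) can be reproduced in a short sketch (truncating integer division and the variable names are assumptions, not taken from any cited reference):

```python
SL, DL, INIT = 256, 128, 30      # magnification 2 with initial phase 30
src = [0, 255, 255, 0]           # S0..S3: black, white, white, black

def interpolate(left, right, dist):
    """Weighting calculation of equation (1)."""
    return (left * (SL - dist) + right * dist) // SL

phase, ref = INIT, 0
out = []
for n in range(3):               # compute D0, D1, and D2
    out.append(interpolate(src[ref], src[ref + 1], phase))
    phase += DL
    if phase >= SL:              # carry: update the reference pixel
        phase -= SL
        ref += 1

print(out)
# -> [29, 157, 255]
```

D2 is computed between S1 and S2 after the carry; since both reference pixels are white, its value is simply 255.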
In the example of FIG. 20, the second pixel S1 and third pixel S2 of the input image have the same value (white), so output pixel values by the weighting calculation become equal to input pixel values. However, if successive pixel values of an input image differ from each other in an image to be enlarged, the weighting calculation can provide smooth output pixel values.
Also in the example of FIG. 20, the virtual pixel DS0 is assumed next to the last pixel S3 of the input image. The value of the virtual pixel DS0 affects the seventh pixel D6 and eighth pixel D7 of the output image.
Setting an initial phase can avoid the problem that the pixel value of an input image is directly output as that of an output image at a magnification setting of 2 or 4. Note that Japanese Patent Laid-Open No. 4-362792 discloses a linear interpolation method which sets an initial phase. In particular, this Japanese Patent Laid-Open No. 4-362792 describes improvement of the SPC method by setting an initial phase “0” at an equal magnification and superimposing the first pixel of an output image at a middle point between the first and second pixels of an input image at other magnification settings.
However, the conventional linear interpolation method executes enlargement processing using the first pixel position of the input image as a reference for the first pixel position of the output image, in the direction in which image data are formed. The first pixel position of the output image is either in phase with the first pixel position of the input image or shifted from it in that direction (between the first and second pixel positions of the input image). As a result, the whole image data after enlargement processing is rasterized with an offset in the direction (rightward) in which image data are formed.
That is, in all the examples shown in FIGS. 18, 19, and 20, input image data is horizontally symmetrical and only one pixel at each end is black pixel data. In contrast, output image data after enlargement processing is horizontally asymmetrical pixel data.
More specifically, in FIG. 18, two pixels of black pixel data are generated at the left end of the output image data, and three pixels at the right end. In FIG. 19, the three pixels at the left end of the output image data are affected to some degree by the first (black) pixel data of the input image, and the four pixels at the right end are affected by the fourth (black) pixel data of the input image.
This trend becomes more serious when the initial phase is set. In FIG. 20, pixel data of the output image affected by a black pixel in input image data are two pixels at the left end and four pixels at the right end. The conventional linear interpolation methods shown in FIGS. 18, 19, and 20 cannot achieve enlargement processing while keeping the symmetry of input image data.
Problems of the linear interpolation method are not limited to horizontal asymmetry. FIGS. 21A to 21C show problems generated in two-dimensional image data. FIG. 21A shows input image data before performing enlargement processing. The input image data is formed from black pixels with a width of N pixels each at upper, lower, right, and left ends. When this input image data is enlarged at a magnification P, the black pixel width should be a pixel width of N pixels×magnification P at all the ends even in output image data as long as horizontal and vertical symmetries are maintained.
However, when no initial phase is set in the linear interpolation method, output image data after enlargement have asymmetrical black pixel widths in the horizontal and vertical directions, as shown in FIG. 21B. In FIG. 21B, Xa is the left black pixel width of the output image data and Xb is the right black pixel width of the output image data. Ya is the upper black pixel width of the output image data and Yb is the lower black pixel width of the output image data.
As shown in FIG. 21B, the black pixel widths Xa and Xb at the left and right ends are not equal in two-dimensional image data. Also, the black pixel widths Ya and Yb at the upper and lower ends are not equal. Particularly when the number of constituent pixels of input image data in the lateral direction (to be referred to as the main scanning direction) differs from that in the longitudinal direction (to be referred to as the sub-scanning direction), the reproduced black pixel widths in the horizontal and vertical directions also differ from each other. That is, even Xa and Ya, and Xb and Yb, are different from each other. This arises because the number of accumulations of the distance DL between constituent pixels of the output image, which is determined by the magnification, differs between the main scanning direction and the sub-scanning direction, producing different phase shifts in the linear interpolation. The reproduced black pixel width of the output image also changes depending on the set magnification.
When the initial phase is set in the linear interpolation method, the asymmetry of output image data after enlargement further worsens, as shown in FIG. 21C. This is because, when the initial phase is set and the first pixel position of an input image is set as a reference, the first pixel position of an output image shifts right (lower right for two-dimensional image data), compared to setting no initial phase. In this case, the difference between Xa and Xb and that between Ya and Yb become larger than those when setting no initial phase.
If a device for reading image data is capable of enlargement in the main scanning direction and sub-scanning direction, like a scanner device, extra data can be added to the left or upper side of the input image data before enlargement processing by the linear interpolation method, shifting the reference point. However, no extra data can be added to input image data obtained by an image sensing device such as a digital camera. Within such a device, a modification such as adding extra data corresponding to the enlargement magnification requires use of the medium holding the input image data or another memory.
An increase in the number of processes degrades the performance of the device, regardless of whether the processing is done in hardware or software. Moreover, even when image data obtained by an image sensing device such as a digital camera has horizontal or vertical symmetry, the asymmetry of output image data having undergone enlargement processing by linear interpolation becomes more pronounced at larger magnifications.
The above description is based on an image forming apparatus. However, these problems also occur in enlargement (resolution conversion) performed by other image processing.