In the state of the art, images are digitally represented in basically two ways. The first way is a description of the graphic elements that constitute the image. For example, in the case of a line, a description of its thickness, its colour, and the positions of its two end points is given. The descriptive elements used are part of a so-called page description language, such as PCL or PostScript. Such images are generated, for example, by computer applications such as word processing or computer-aided design applications. The second way is the definition of pixels and the specification of the colour of each pixel in terms of amounts of standard colorants. Each pixel is associated with a position in the image. An image composed of pixels is also known as a rasterized image. For a monochrome image, only one colorant is involved and the value of a pixel comprises one number, indicating the density of the pixel. If the image is a full-colour image, each pixel is characterized by a combination of numbers, each indicating the density of the pixel in one of the colorants. For example, a combination of densities for red, green, and blue for each pixel characterizes an RGB image. In the same way, a combination of densities for cyan, magenta, yellow, and black for each pixel forms a CMYK image. For image-processing purposes these colour images may be viewed as the combination of several monochrome images, one for each colorant, which may be processed individually or jointly. These monochrome images are continuous-tone images when the number of values a pixel can assume is large enough to represent all the relevant grey tones. Usually the 256 values of 8-bit pixels are considered sufficient for continuous-tone images. Examples of rasterized, continuous-tone images are images that stem from scanning a hardcopy original and images that originate from digital cameras.
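The rasterized representation described above can be sketched as follows; the image size and density values are illustrative only.

```python
# Illustrative sketch: a 3x3 rasterized monochrome image as a grid of
# 8-bit density values (0..255), and the same pixel positions in an RGB
# raster, where each pixel holds one density value per colorant.
mono = [
    [0,   0,   0],
    [0, 128, 255],
    [0, 255, 255],
]

rgb = [
    [(0, 0, 0), (0, 0, 0),       (0, 0, 0)],
    [(0, 0, 0), (128, 128, 128), (255, 255, 255)],
    [(0, 0, 0), (255, 255, 255), (255, 255, 255)],
]

# As noted above, a full-colour image may be viewed as a combination of
# monochrome images, one per colorant; here the first (red) colorant is
# extracted as its own monochrome plane.
red_plane = [[pixel[0] for pixel in row] for row in rgb]
```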
Combinations of these two image types also occur: mixed images contain both descriptive elements, or vector elements, and rasterized images in the form of picture elements. It is noted that at the time of printing, images that comprise descriptive elements are often converted into rasterized images by a Raster Image Processor.
Depending on the optical characteristics of the imaging device that generated the rasterized image, a digital image may need some processing to make it more suitable for the intended purpose. Especially when a digital image is printed, steep transitions in the amount of ink or toner deposited on the receiving material on one side of an edge relative to the other side are preferred. In text rendering this enhances the straightness of the character lines, and in pictures it enhances the perceived sharpness. Without processing, the image may make a blurred impression. The same problem occurs when a continuous-tone image is scaled to a larger size. This is, e.g., the case when a mixed image comprises a picture element in a lower resolution than the resolution in which the complete image is rasterized. It may also occur when an image having vector elements is rasterized in a lower resolution than the addressability of the printer. E.g., when an image is rasterized at 300 pixels per inch (ppi) and a printer has an addressability of 600 dots per inch (dpi), one pixel has a value that is used for two dots, so the pixel is addressed twice.
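The 300 ppi to 600 dpi example above amounts to pixel replication, which can be sketched as follows; the helper name and replication factor are illustrative, assuming simple nearest-neighbour addressing.

```python
# Minimal sketch of the resolution mismatch described above: when a low
# resolution raster is addressed at a higher printer resolution, each pixel
# value is simply repeated (here twice in each direction). Edges therefore
# become flat plateaus of equal values, which look blurred.

def replicate(image, factor):
    """Nearest-neighbour upscaling: repeat each pixel `factor` times
    horizontally and each row `factor` times vertically."""
    out = []
    for row in image:
        wide = [value for value in row for _ in range(factor)]
        out.extend([list(wide) for _ in range(factor)])
    return out

low_res = [[0, 255]]  # one black and one white pixel at 300 ppi
print(replicate(low_res, 2))  # [[0, 0, 255, 255], [0, 0, 255, 255]]
```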
The image processing for blurred images is well established. Image filters comprising sharpening kernels are available in virtually every image-processing application for personal computers. These comprise linear finite impulse response filters that generate a pixel value as a function of the original pixel value and the values of pixels in the direct neighbourhood of the original pixel. A non-linear variant of a sharpening algorithm is described in U.S. Pat. No. 7,068,852. In that disclosure, the difference between the average values of two groups of pixels on either side of a boundary between two pixels is the basis for adjusting the values of the pair of pixels adjacent to the boundary. A gain value depending on the magnitude of the difference is used to tune the amount of sharpening in the processing step. The advantage over linear methods is a better restriction to edges: other sequences of pixel values, such as increasing values in smooth transition areas, remain unaffected. A disadvantage of this non-linear method is the need to adjust the gain values in order to select the degree of filtering.
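The non-linear idea just described can be sketched roughly as follows; the group size, threshold, and gain function are illustrative assumptions, not the actual parameters or procedure of U.S. Pat. No. 7,068,852.

```python
# Rough sketch (illustrative parameters, not the patented method) of the
# non-linear approach described above: average two groups of pixels on
# either side of a pixel boundary, and if the difference is large enough,
# adjust only the pixel pair adjacent to that boundary.

def nonlinear_sharpen(pixels, group=2, threshold=40, gain=0.25):
    """Steepen edges by moving the two pixels adjacent to a boundary
    apart, in proportion to the difference of the group averages."""
    out = list(pixels)
    for b in range(group, len(pixels) - group + 1):
        # Boundary between pixels[b - 1] and pixels[b].
        left_avg = sum(pixels[b - group:b]) / group
        right_avg = sum(pixels[b:b + group]) / group
        diff = right_avg - left_avg
        # Only pronounced edges are adjusted; smooth transitions with a
        # small difference are left untouched, unlike a linear filter.
        if abs(diff) > threshold:
            out[b - 1] -= gain * diff
            out[b] += gain * diff
    return out

# A smooth ramp stays unchanged (the restriction to edges)...
print(nonlinear_sharpen([0, 10, 20, 30, 40]))  # [0, 10, 20, 30, 40]
# ...while a steep edge is made steeper; note that some values go
# below 0 and above 255, i.e. this sketch also overshoots.
print(nonlinear_sharpen([0, 0, 0, 255, 255, 255]))
```

The choice of gain is exactly the adjustment the paragraph above names as a disadvantage: too small and edges stay soft, too large and the overshoot grows.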
Both types of filtering, linear and non-linear, have a tendency to show overshoot. This is the effect that, when the transition at the edge is made steeper, a number of pixels somewhat further from the boundary on the side where the pixel values are low take a value that is below the intended value, and some of the pixels somewhat further from the boundary on the side where the pixel values are high take a value that is above the intended value. When black text is rendered on a white background, this overshoot is welcome, because pixel values beyond the limits of the available range are clipped and the resulting edge shows a steep transition from black to white. However, coloured text on a coloured background cannot be processed in that way, because a halo of a different colour may appear between the text and the background. In pictorial images, too, the overshoot is unwanted, and sharpening is therefore limited to moderate operation. In the case of rescaled rasterized images, a frequent cause of unsharpness is the occurrence of multiple pixels with an equal value that is intermediate between the high and low values at the two sides of the edge. In this case as well, overshoot as the result of enhancing the sharpness is unwanted, because it deteriorates the image compared with the unscaled image.
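The overshoot effect and the role of clipping can be illustrated with a simple linear sharpening filter; the 1D kernel values below are illustrative only.

```python
# Illustrative sketch of overshoot: a linear sharpening kernel pushes
# pixels near an edge below the low level and above the high level.
# The kernel (-1, 3, -1) sums to 1, so flat areas are unchanged.

def sharpen_and_clip(pixels, kernel=(-1, 3, -1)):
    """Linear 1D sharpening; edge pixels are replicated at the borders
    and the result is clipped to the 8-bit range 0..255."""
    out = []
    for i in range(len(pixels)):
        left = pixels[max(i - 1, 0)]
        right = pixels[min(i + 1, len(pixels) - 1)]
        value = kernel[0] * left + kernel[1] * pixels[i] + kernel[2] * right
        out.append(min(255, max(0, value)))
    return out

# A blurred edge between two mid-range levels, e.g. one colorant of
# coloured text on a coloured background: the overshoot values 75 and 175
# survive the clipping and would appear as a halo.
print(sharpen_and_clip([100, 100, 125, 150, 150]))  # [100, 75, 125, 175, 150]

# A black-to-white edge: the out-of-range values are clipped back to
# 0 and 255, so the steep transition is preserved without a halo.
print(sharpen_and_clip([0, 0, 255, 255]))  # [0, 0, 255, 255]
```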
Therefore, a problem of the present state of the art is the occurrence of overshoot. The invention has as its goal to provide a sharpening algorithm that shows little or no overshoot.