This invention relates generally to the processing of grayscale image data. More particularly, this invention relates to the conversion of grayscale image data containing text and line art features on a non-white background into a halftone image.
Grayscale images use many shades of gray to represent an image. Halftone images, in contrast, typically use black and white dots to form an image; the pattern and density of the black and white dots are varied to represent different shades. Problems arise in converting a grayscale image into a halftone image when the grayscale image background is a non-white color or the foreground color is not black. Current methods of converting grayscale image data into halftone image data cause text and line art features that lie on a non-white background to be obscured in the resulting halftone image. When converting a grayscale image which contains many different shades of gray, the conversion algorithm must determine whether to fill each corresponding position in the halftone image being created with a black dot or a white space. Current conversion algorithms have difficulty distinguishing the edges of text and line art features from backgrounds which are not white, resulting in a lack of definition of those features in the new halftone image.
The present invention addresses problems caused by the current methods of converting grayscale image data into halftone images when the grayscale image data has a non-white background and contains text and line art features. One embodiment of the invention utilizes a conversion algorithm which uses a combination of pixel averages formed from different subsets of pixel values of pixels located within the grayscale image data. The different pixel averages are weighted differently depending on the particular characteristics of the area of the grayscale image data being converted to halftone image data. Pixels in the grayscale image are examined one at a time; the pixel being examined is the focus pixel. Running averages are maintained for both the average pixel value of pixels located in the row of pixels containing the focus pixel and the average pixel value of pixels located in the column of pixels containing the focus pixel. Additionally, a pixel window is superimposed over the grayscale image area near the focus pixel, and the average pixel value of the pixels in that pixel window is tracked. The conversion algorithm examines the area near the focus pixel in the grayscale image data for edges indicative of text and line art features and adjusts the weight given to the different averages depending upon whether or not such an edge was found. If an edge is detected in the grayscale image, the conversion algorithm gives more weight to the local pixel average. If, on the other hand, the presence of an edge is not detected in the area of the focus pixel, more weight is given to the running horizontal and vertical averages.
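The edge-dependent weighting described above can be sketched as follows. This is a minimal illustrative implementation, not the patented embodiment: the function name, the variance-style edge test, the window size, and the weight constants (0.8 when an edge is found, 0.2 otherwise) are all assumptions chosen for clarity.

```python
def threshold_for_pixel(image, r, c, row_avg, col_avg, win=5,
                        edge_delta=32, w_edge=0.8, w_flat=0.2):
    """Compute an adaptive threshold for the focus pixel at (r, c).

    `image` is a list of rows of grayscale values.  `row_avg` and
    `col_avg` stand in for the running averages maintained for the
    focus pixel's row and column.  A `win` x `win` window centered on
    the focus pixel supplies the local average.  All constants here
    are illustrative assumptions, not values from the source.
    """
    h, w_img = len(image), len(image[0])
    half = win // 2
    # Gather the pixel values inside the window, clipped at the borders.
    vals = [image[i][j]
            for i in range(max(0, r - half), min(h, r + half + 1))
            for j in range(max(0, c - half), min(w_img, c + half + 1))]
    local_avg = sum(vals) / len(vals)
    # Crude edge test: a large spread of values inside the window
    # suggests a text or line-art boundary near the focus pixel.
    edge_found = (max(vals) - min(vals)) > edge_delta
    # Edge present: favor the local window average.
    # No edge: favor the running row/column averages.
    w_local = w_edge if edge_found else w_flat
    running_avg = (row_avg + col_avg) / 2.0
    return w_local * local_avg + (1.0 - w_local) * running_avg
```

In a flat region the returned threshold tracks the running averages, so gradual background shading does not trigger spurious dots; near an edge it tracks the local window, sharpening the transition.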
After assigning the proper weight to the averages, the conversion algorithm produces a threshold value which is compared against the pixel value and used to determine whether that pixel will be depicted as a 1, that is a black dot, or a 0, that is a white space, in the halftone image. By shifting the weight given to the various pixel averages depending upon the presence or absence of an edge, the conversion algorithm is better able to isolate text and line art features from a non-white background in a grayscale image.