A known technique generates both (1) text image data representing text and (2) background image data that does not include the text, based on original image data representing an original image including the text. In the known technique, the text image data (e.g., binary data) is generated based on the original image data and is formed from text pixels, which constitute the text, and background pixels, which do not constitute the text. To create the background image data, the color of pixels in the original image data corresponding to the text pixels of the text image data is changed to the color of pixels in the original image corresponding to the background pixels of the text image data. Thus, the background image data is also generated based on the original image data. The text image data is compressed by a compression method suitable for compression of the text image data (e.g., the Modified Modified Read ("MMR") method), and the background image data is compressed by another compression method suitable for compression of the background image data (e.g., the Joint Photographic Experts Group ("JPEG") method). By separating the background image data from the text image data, the original image data as a whole can be compressed at a higher compression ratio.
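The separation described above can be sketched as follows. This is a minimal illustrative example, not the known technique's actual implementation: the function name, the fixed brightness threshold, and the use of the mean background color as the fill value are all assumptions made for the sketch, and the compression steps (MMR, JPEG) are omitted.

```python
import numpy as np

def separate_layers(original, threshold=128):
    """Split a grayscale image into a binary text mask and a background image.

    Pixels darker than `threshold` are treated as text pixels (illustrative
    rule). The background image is the original with those pixels replaced
    by a background color, here the mean of the non-text pixels.
    """
    original = np.asarray(original, dtype=np.uint8)
    text_mask = original < threshold          # True where a pixel belongs to text
    fill = int(original[~text_mask].mean())   # representative background color
    background = original.copy()
    background[text_mask] = fill              # erase the text from the background layer
    return text_mask, background

# Toy one-channel "page": white background (255) with a dark text stroke (0).
page = np.full((5, 5), 255, dtype=np.uint8)
page[2, 1:4] = 0
mask, bg = separate_layers(page)
```

In a real pipeline, `mask` would then be compressed losslessly (e.g., MMR) and `bg` lossily (e.g., JPEG), and the two layers recombined at decode time.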
For example, when the sharpness of text in an original image is low, in other words, when the text is blurred, a text outline may remain in the background image. In the known technique, the background image data is therefore generated using text image data in which the line width of the text is increased, reducing the possibility that the text outline remains in the background image.
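Increasing the line width of the text can be sketched as one step of binary dilation applied to the text mask before the background image is generated, so that the fill color also covers the blurred halo around each stroke. This is an illustrative sketch only; the 4-neighborhood dilation, the threshold, and the white fill value are assumptions, not the known technique's specific method.

```python
import numpy as np

def dilate(mask):
    """One step of 4-neighborhood binary dilation implemented with array shifts."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]   # grow downward
    out[:-1, :] |= mask[1:, :]   # grow upward
    out[:, 1:] |= mask[:, :-1]   # grow rightward
    out[:, :-1] |= mask[:, 1:]   # grow leftward
    return out

# Toy "page": white background (255) with a thin, possibly blurred stroke (0).
page = np.full((5, 5), 255, dtype=np.uint8)
page[2, 1:4] = 0
mask = page < 128                # text pixels
wide_mask = dilate(mask)         # text mask with increased line width

background = page.copy()
background[wide_mask] = 255      # the fill now also covers the stroke's outline
```

The widened mask covers one extra pixel on every side of the stroke, so a faint outline just outside the original text pixels is overwritten along with the text itself.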