1. Technical Field
The present invention relates to an image processing apparatus for performing compression/encoding and decoding/decompression of image data. In particular, the present invention relates to an image processing apparatus capable of realizing, at high speed, multi-scalable drawing based on image data encoded with a vector quantization technique; an image display apparatus and an image forming apparatus incorporating the image processing apparatus; an image processing method; and a storage medium.
2. Description of Related Art
In order to realize efficient transmission and recording of digital images and a reduction of required storage capacity, various techniques for reducing the amount of data by compression-encoding digital images have been proposed.
In particular, the JPEG (Joint Photographic Experts Group) technique is the most versatile compression-encoding technique for multi-value images. With the JPEG technique, multi-value image data represented in R (Red) G (Green) B (Blue) space is converted into data represented in YCbCr space, and further converted into image data expressed by frequency components by DCT (Discrete Cosine Transform) processing. In general, if image data represents an image of an object existing in nature and is not an artificially created image, it contains few high-frequency components, and its energy is concentrated in the low-frequency components. The JPEG technique exploits this characteristic: it reduces the data amount of the image data by eliminating high-frequency components exceeding a predetermined value, and further compresses the image data by encoding the remaining data using Huffman encoding.
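The pipeline described above can be sketched as follows. The function names and the coefficient-discarding rule (zeroing DCT coefficients whose combined frequency index exceeds a threshold) are illustrative assumptions for this sketch; actual JPEG uses per-coefficient quantization tables rather than outright elimination.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix: row k holds the k-th cosine basis.
    m = np.zeros((n, n))
    for k in range(n):
        scale = np.sqrt(1.0 / n) if k == 0 else np.sqrt(2.0 / n)
        m[k] = scale * np.cos(np.pi * (2 * np.arange(n) + 1) * k / (2 * n))
    return m

def rgb_to_ycbcr(rgb):
    # ITU-R BT.601 full-range conversion, as used by JFIF/JPEG.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return np.stack([y, cb, cr], axis=-1)

def compress_block(block, keep=4):
    # 2-D DCT of an 8x8 block, then discard coefficients whose
    # horizontal + vertical frequency index exceeds `keep`
    # (a crude stand-in for JPEG's quantization of high frequencies).
    d = dct_matrix(8)
    coeffs = d @ (block - 128.0) @ d.T
    u, v = np.meshgrid(range(8), range(8), indexing="ij")
    coeffs[u + v > keep] = 0.0
    return coeffs

def decompress_block(coeffs):
    # Inverse 2-D DCT; the orthonormal basis makes the inverse a transpose.
    d = dct_matrix(8)
    return d.T @ coeffs @ d + 128.0
```

Because natural-image blocks concentrate their energy in the low-frequency coefficients, discarding the high-frequency ones changes the reconstruction only slightly while removing most of the data.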
As another compression-encoding technique, there is a vector quantization technique. Vector quantization is performed as follows. When processing image data on the basis of a predetermined block size, for example, a block consisting of 4×4 pixels, block patterns that appear frequently among the 4×4-pixel blocks, together with an index for each block pattern, are stored in advance (this collection is called a code book). Then, when the processing is actually performed on a block-by-block basis, the block pattern with the highest correlation with each block is selected, and the data of each block is approximated by replacing it with the index of the selected block pattern. Thus, by replacing data with a prepared block pattern, it is possible to reduce the data amount of the image data; by further encoding the indices, it is possible to compress the image data.
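The block-by-block replacement described above can be sketched as follows, using the sum of squared differences as the correlation measure; the function name and the choice of distance metric are assumptions of this sketch, not mandated by the technique.

```python
import numpy as np

def vq_encode(image, codebook, block=4):
    # Replace each block x block tile with the index of the code-book
    # pattern that minimizes the sum of squared differences (one common
    # way to realize "highest correlation"; other metrics are possible).
    h, w = image.shape
    indices = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = image[y:y + block, x:x + block]
            errs = [np.sum((tile - p) ** 2) for p in codebook]
            indices.append(int(np.argmin(errs)))
    return indices
```

Each 4×4 tile (16 pixel values) is thus reduced to a single index, and the list of indices can then be entropy-encoded for further compression.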
As a means of increasing the speed of processing using the vector quantization technique, there is a technique in which input pixel patterns are classified by features of the image, and the index of an approximate pixel pattern is searched for only within the corresponding part of the code book. FIG. 1 is an explanatory view showing an example of a code book in which pixel patterns are classified into pixel patterns with an edge and pixel patterns without an edge, based on the presence or absence of an edge as an image feature. If an input pixel pattern contains an edge, the index of an approximate pixel pattern is searched for from the code book which stores pixel patterns with an edge. On the other hand, if an input pixel pattern does not contain an edge, the index of an approximate pixel pattern is searched for from the code book storing pixel patterns composed of flat blocks.
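The classified search can be sketched as follows. The edge classifier shown here (thresholding the intensity range within a block) is a hypothetical stand-in; real systems may use gradient-based or other features, and the function names are illustrative.

```python
import numpy as np

def has_edge(tile, threshold=64.0):
    # Hypothetical edge feature: a large intensity range within the
    # block is taken as evidence of an edge.
    return float(tile.max() - tile.min()) > threshold

def classified_search(tile, edge_book, flat_book):
    # Search only the sub-code-book matching the block's class,
    # so each block is compared against a fraction of all patterns.
    book = edge_book if has_edge(tile) else flat_book
    errs = [np.sum((tile - p) ** 2) for p in book]
    return int(np.argmin(errs))
```

Because each block is matched against only one sub-code-book, the number of pattern comparisons per block drops roughly in proportion to the number of classes.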
With the use of the vector quantization technique, compression-encoded image data can be decoded (decompressed) simply by the process of referring to the code book with the replaced indices. Hence, it is possible to realize drawing based on image data at high speed with the vector quantization technique.
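The decoding step reduces to a table lookup, which is why it is fast; a minimal sketch (the function name and shape parameter are assumptions of this sketch):

```python
import numpy as np

def vq_decode(indices, codebook, shape, block=4):
    # Decoding is a pure table lookup into the code book: no transform
    # or per-pixel arithmetic is needed, which is what makes
    # VQ decompression fast.
    h, w = shape
    out = np.empty(shape)
    it = iter(indices)
    for y in range(0, h, block):
        for x in range(0, w, block):
            out[y:y + block, x:x + block] = codebook[next(it)]
    return out
```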
Moreover, as one of the compression-encoding techniques, there is a technique for outputting a smooth image even when scaling is performed according to various resolutions. Japanese Patent Application Laid-Open No. 05-174140 (1993) discloses a technique which calculates a contour vector obtained by smoothing a contour detected from a binary image, scales the contour vector by a desired scale factor when performing resolution conversion, and regenerates the image by filling in pixels of one of the two binary values using the scaled contour vector as a boundary. Thus, there is disclosed a technique capable of obtaining a high-quality (multi-scalable) binary image even when the image is scaled by a desired scale factor.
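One way the scale-then-fill step might be realized is sketched below. The function names are hypothetical, and the even-odd ray-casting fill is an illustrative choice of this sketch, not necessarily the method of the cited publication.

```python
import numpy as np

def scale_contour(vertices, factor):
    # Scaling the contour vector, rather than the pixels, is what
    # preserves a smooth boundary at any target resolution.
    return [(x * factor, y * factor) for x, y in vertices]

def fill_polygon(vertices, shape):
    # Rasterize a closed contour by even-odd ray casting: a pixel is
    # set when a horizontal ray from its center crosses the contour
    # an odd number of times.
    h, w = shape
    out = np.zeros(shape, dtype=np.uint8)
    vx = np.array([v[0] for v in vertices])
    vy = np.array([v[1] for v in vertices])
    n = len(vertices)
    for y in range(h):
        for x in range(w):
            inside = False
            j = n - 1
            for i in range(n):
                # Only edges straddling the pixel-center scanline count.
                if (vy[i] > y + 0.5) != (vy[j] > y + 0.5):
                    xi = vx[i] + (y + 0.5 - vy[i]) * (vx[j] - vx[i]) / (vy[j] - vy[i])
                    if x + 0.5 < xi:
                        inside = not inside
                j = i
            out[y, x] = 1 if inside else 0
    return out
```

Because the contour is stored as vectors, the same contour filled at twice the scale yields a boundary of the same smoothness rather than magnified pixel blocks.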