Conventionally, JPEG systems using the discrete cosine transform (hereinafter abbreviated as DCT) and systems using the wavelet transform have often been used as still image compression systems. Since coding systems of this type are variable-length coding systems, the code amount varies from one image to another.
In the JPEG system, an international standard, only one set of quantization matrices can be defined per image, so the code amount cannot be adjusted without a prescan. Therefore, when this method is used in a system that stores data in a limited memory, memory overflow may occur.
To prevent this, a sufficient memory capacity must be secured. However, input images are not always the same size. Accordingly, it is necessary to secure a memory whose capacity suits the maximum possible input size.
Unfortunately, in an apparatus in which the memory is secured in accordance with the maximum size, this memory is occupied even when images smaller than the maximum size are input. That is, the memory cannot be utilized effectively.
Alternatively, a memory capacity may be secured in accordance with a medium image size. In this case, however, the coded data obtained by encoding a larger image may exceed this memory capacity.
As countermeasures against this inconvenience, the following methods are known. In one method, if a predetermined code amount is exceeded, the compression ratio is changed and the original is reread. In another method, the code amount is estimated beforehand by a prescan, and the quantization parameters are reset to adjust the code amount.
In a conventionally known code amount control method that performs a prescan, pre-compressed data is input to an internal buffer memory and expanded, and the expanded data is finally compressed with changed compression parameters and output to an external memory. In this method, the compression ratio of the final compression must be higher than that of the pre-compression.
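For illustration only, this two-pass (prescan and final scan) control might be sketched as follows. The compress() function below is a toy stand-in, not an actual JPEG codec; it merely assumes that the code size shrinks as the quantizer scale grows:

```python
def compress(image_bytes: bytes, q_scale: int) -> bytes:
    """Toy stand-in for a variable-length codec: keeping every q_scale-th
    byte mimics a higher compression ratio at a larger quantizer scale."""
    return image_bytes[::q_scale]

def two_pass_encode(image_bytes: bytes, target_size: int) -> bytes:
    # Pass 1 (prescan): measure the code amount at the initial scale,
    # raising the quantizer scale until the estimate fits the target.
    q_scale = 1
    code = compress(image_bytes, q_scale)
    while len(code) > target_size:
        q_scale += 1              # higher scale -> higher compression ratio
        code = compress(image_bytes, q_scale)
    # Pass 2 (final scan): compress with the parameters found above.
    return compress(image_bytes, q_scale)
```

Because the parameters are fixed only after the prescan, the image data must be read twice, which is exactly the drawback noted below.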
In another known method, an allowable code amount is obtained for each pixel block, and a coefficient obtained by level-shifting the DCT coefficients n times is Huffman-coded to decrease the code amount. The shift amount n is determined on the basis of the allowable code amount.
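As a concrete illustration of this level-shifting idea, the sketch below shifts each DCT coefficient right by n bits and searches for the smallest n whose estimated code amount fits the allowable amount. The bit_cost() estimate is a hypothetical stand-in for a real Huffman code-size calculation, not the method of any particular standard:

```python
def level_shift(coeffs, n):
    """Arithmetic right shift of each coefficient by n bits; small
    coefficients collapse to zero, shrinking the subsequent Huffman code."""
    return [c >> n if c >= 0 else -((-c) >> n) for c in coeffs]

def bit_cost(coeffs):
    # Crude stand-in for the Huffman code size: magnitude-category bits of
    # each nonzero coefficient (run lengths and table overhead are ignored).
    return sum(abs(c).bit_length() for c in coeffs if c != 0)

def choose_shift(coeffs, allowable_bits):
    # Determine the shift amount n from the allowable code amount for
    # this pixel block.
    n = 0
    while bit_cost(level_shift(coeffs, n)) > allowable_bits:
        n += 1
    return n
```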
However, the method of executing prescan and final scan is time-consuming because an original is scanned twice.
Image information contains not only original image data but also image region information that accompanies the image data. The image region information is mainly used by an image output unit such as a printer engine to execute color processing or adjust the number of tone levels so as to obtain a fine-looking output image. A natural image contains chromatic and achromatic colors, while a document original contains many black characters. When the type of black ink is changed depending on the type of image, a natural image can be reproduced more naturally, and characters can be output more sharply.
In this way, 1-bit attribute flag data is added to each pixel to indicate whether the pixel has a chromatic or achromatic color, or whether it corresponds to a character portion. Accordingly, when the image is output and, more particularly, printed, the image quality can be increased. The image region information also contains information other than that described above.
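By way of illustration, per-pixel 1-bit attribute flags of this kind could be stored as a packed bitmap. The sketch below is an assumption about one possible representation, not a representation prescribed above:

```python
def pack_flags(flags):
    """Pack per-pixel 1-bit attribute flags (e.g. 1 = character or
    achromatic pixel) into bytes, most significant bit first."""
    out = bytearray((len(flags) + 7) // 8)
    for i, f in enumerate(flags):
        if f:
            out[i // 8] |= 0x80 >> (i % 8)
    return bytes(out)
```

Even packed this way, the attribute data adds one bit per pixel, which is why it must be compressed along with the image data.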
Hence, when image information is to be compressed, not only the image data but also the image region information must be compressed. To keep the compressed and encoded data at or below a target size, scanning must be executed twice, as a prescan and a final scan.
To solve this problem, the assignee of the present application has already proposed several techniques for encoding an image within substantially one encoding cycle using a small memory capacity. These techniques will be described later in detail; only a brief description is given here. When the code amount is about to exceed the target memory capacity, the compression ratio of the encoder is increased. In addition, data that has already been encoded is re-encoded so as to be equivalent to data compressed at the newly set compression ratio.
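Under simplifying assumptions (a toy compress() model, and re-encoding simulated by recompressing the already-processed blocks rather than transcoding their stored codes), the scheme just outlined might be sketched as:

```python
def compress(block: bytes, q: int) -> bytes:
    """Toy stand-in for a DCT codec: code size shrinks as q grows."""
    return block[::q]

def encode_one_pass(blocks, target_size):
    q = 1            # current quantizer scale (compression ratio)
    seen = []        # blocks already processed (the real scheme re-encodes
                     # the stored codes; reusing blocks keeps this short)
    coded = []       # encoded data accumulated so far
    size = 0
    for block in blocks:
        code = compress(block, q)
        while size + len(code) > target_size:
            # About to exceed the target: raise the compression ratio,
            # re-encode everything already stored so it is equivalent to
            # data compressed at the new ratio, then retry this block.
            q += 1
            coded = [compress(b, q) for b in seen]
            size = sum(len(c) for c in coded)
            code = compress(block, q)
        seen.append(block)
        coded.append(code)
        size += len(code)
    return b"".join(coded), q
```

The single pass over the input succeeds only if the re-encoding of the already-stored data can finish quickly enough, which motivates the processing-capability concern below.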
Although the above process is very effective, the re-encoder that executes the re-encoding process must have a high processing capability. If the re-encoding capability is low, the compression code amount at the newly set compression ratio may exceed the target value before the re-encoding ends. If this occurs, a further re-encoding process cannot be activated, and as a result, compression encoding fails.
For the above reason, a high re-encoding processing capability is required. In practice, however, such a case occurs relatively rarely. Hence, a high re-encoding capability is excessive and leads to an increase in cost.