Generally, since an image signal carries an enormous amount of information, various methods for compressing that information for storage or transmission have been proposed.
As high-performance coding techniques for coding a still picture or an image signal, JPEG and MPEG, both based on the DCT (Discrete Cosine Transform), are widely employed (refer to "ISO/IEC CD 10918-1, Digital Compression and Coding of Continuous-tone Still Images, Part 1: Requirements and Guidelines" for JPEG and to "ISO/IEC 11172-2:1993, Information Technology--Coding of Moving Pictures and Associated Audio for Digital Storage Media at up to about 1.5 Mbit/s--Part 2: Video" for MPEG). In recent years, a subband coding method, wherein a signal is repeatedly divided into frequency bands (subbands) and coded, has been proposed and examined for practical use.
Generally, these coding methods employ variable-length coding, such as Huffman coding, so the amount of code for an image or for a unit time is not fixed.
Since the codes produced by these coding methods are stored in external memory devices utilizing magnetic media or are transmitted through various kinds of communication lines, it is necessary to reduce the codes to a prescribed amount for storage or transmission.
For example, in MPEG, a system for controlling the average amount of code per unit time by dynamically changing quantization factors according to the amount of generated code has been put to practical use.
Further, in the coding system called the baseline system of JPEG, the amount of code for each image is determined by a quantization table and a Huffman table, both of which are fixed before coding.
Furthermore, for JPEG, progressive coding and hierarchical progression of DCT coefficients are defined as hierarchical coding systems. When these systems are employed, hierarchically reproducible codes are obtained while maintaining a reproduced image quality as high as, and an amount of code comparable to, that of the baseline system.
Furthermore, in subband coding, a hierarchically reproducible code sequence is obtained by generating codes successively, starting from the low-frequency component.
A conventional subband coding system will be explained using FIG. 46. In FIG. 46, reference numeral 201 designates a horizontal high-pass filter (hereinafter referred to as HPF), numeral 202 designates a horizontal low-pass filter (hereinafter referred to as LPF), numerals 203 and 204 designate down-sample filters (hereinafter referred to as DSFs) for 1/2 down sampling in the horizontal direction, numerals 205 and 207 designate vertical HPFs, numerals 206 and 208 designate vertical LPFs, numerals 209, 210, 211, and 212 designate DSFs for 1/2 down sampling in the vertical direction, numeral 213 designates a selector for selecting an input signal, numeral 214 designates a quantizer, and numeral 215 designates a variable-length coder (hereinafter referred to as VLC).
A description is given of the operation of the subband coding system.
An original image is input from an input node 200, and a high-frequency component and a low-frequency component in the horizontal direction are extracted by the HPF 201 and the LPF 202, followed by 1/2 down sampling by the DSFs 203 and 204, respectively. Thereafter, from the signal down-sampled by the DSF 203, a high-frequency component and a low-frequency component in the vertical direction are extracted by the HPF 205 and the LPF 206, followed by 1/2 down sampling by the DSFs 209 and 210, respectively. Further, from the signal down-sampled by the DSF 204, a high-frequency component and a low-frequency component in the vertical direction are extracted by the HPF 207 and the LPF 208, followed by 1/2 down sampling by the DSFs 211 and 212, respectively. As a result, four subbands HH, HL, LH, and LL are generated.
The characters identifying each subband are the first letters of the filters, HPF and LPF, arranged in the order in which the filters are applied, from the left. For example, LH is the subband obtained by horizontal LPF filtering and 1/2 down sampling followed by vertical HPF filtering and 1/2 down sampling. Since these four subbands HH, HL, LH, and LL are generated by vertical and horizontal 1/2 down sampling, the number of pixels in each subband is 1/4 of that in the original image, so the combined subbands are equal in size to the original image. The selector 213 receives the subbands from the DSFs 212, 211, 210, and 209 and outputs them in the order LL, LH, HL, HH. The quantizer 214 quantizes each subband with a quantization factor for that subband. The VLC 215 encodes each quantized subband and outputs a variable-length code.
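The one-level decomposition described above can be sketched as follows. The 2-tap Haar filter pair used here is an assumption made for illustration only; the text does not specify the coefficients of the HPFs and LPFs.

```python
import numpy as np

def subband_split(img):
    """One level of subband decomposition as in FIG. 46:
    horizontal HPF/LPF with 1/2 down sampling, then vertical.
    A 2-tap Haar filter pair is assumed for illustration."""
    a = np.asarray(img, dtype=float)
    s = np.sqrt(2.0)
    # Horizontal pass: LPF 202 / HPF 201 followed by DSFs 204 / 203.
    lo = (a[:, 0::2] + a[:, 1::2]) / s   # horizontal low-frequency half
    hi = (a[:, 0::2] - a[:, 1::2]) / s   # horizontal high-frequency half
    # Vertical pass on each half (LPFs 206/208, HPFs 205/207, DSFs 209-212).
    LL = (lo[0::2, :] + lo[1::2, :]) / s
    LH = (lo[0::2, :] - lo[1::2, :]) / s
    HL = (hi[0::2, :] + hi[1::2, :]) / s
    HH = (hi[0::2, :] - hi[1::2, :]) / s
    return LL, LH, HL, HH
```

Each of the four subbands has 1/4 of the original pixel count, so their combined size equals the original image, as stated above.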
In the above-mentioned quantization step, a higher-frequency subband is quantized more coarsely. Therefore, among the variable-length codes generated by the VLC 215, the LL component accounts for the highest percentage and the HH component for the lowest.
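The per-subband quantization can be sketched minimally as follows; the step sizes shown are illustrative assumptions, since the actual quantization factors are not given in the text.

```python
import numpy as np

def quantize_subbands(subbands, steps):
    """Quantize each subband with its own step size. Higher-frequency
    subbands get larger (coarser) steps, so more of their coefficients
    round to zero and their variable-length codes shrink accordingly."""
    return [np.round(b / q).astype(int) for b, q in zip(subbands, steps)]

# Example (assumed steps): fine for LL, coarse for HH,
# matching the selector order LL, LH, HL, HH.
example_steps = [1, 4, 4, 8]
```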
Since the subband LL is obtained by subjecting the original image to vertical and horizontal LPF filtering and 1/2 down sampling, it is a reduced image of the original.
When an image coded by the above-described subband system is decoded, initially the subband-coded image is decoded on the assumption that only the LL component exists and the coefficients of the LH, HL, and HH components are all 0. Next, the subband-coded image is decoded assuming that only the LL and LH components exist and the coefficients of the HL and HH components are all 0. By repeating this processing, hierarchical reproduction is realized in the subband coding method.
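The hierarchical reproduction described above can be sketched as follows. A 2-tap Haar synthesis filter is assumed (matching a Haar analysis stage), since the text does not specify the filter coefficients.

```python
import numpy as np

def subband_merge(LL, LH, HL, HH):
    """Inverse of one analysis level: vertical synthesis with
    upsampling, then horizontal (Haar filters assumed)."""
    s = np.sqrt(2.0)
    def vmerge(lo_v, hi_v):
        out = np.empty((2 * lo_v.shape[0], lo_v.shape[1]))
        out[0::2, :] = (lo_v + hi_v) / s
        out[1::2, :] = (lo_v - hi_v) / s
        return out
    lo = vmerge(LL, LH)          # horizontal low-frequency half
    hi = vmerge(HL, HH)          # horizontal high-frequency half
    img = np.empty((lo.shape[0], 2 * lo.shape[1]))
    img[:, 0::2] = (lo + hi) / s
    img[:, 1::2] = (lo - hi) / s
    return img

def hierarchical_decode(LL, LH, HL, HH):
    """Decode progressively: first with only LL, then adding LH, HL,
    and HH in turn, the missing subbands being set to 0."""
    z = np.zeros_like(LL)
    return [
        subband_merge(LL, z, z, z),    # LL only: coarse image
        subband_merge(LL, LH, z, z),   # LL + LH
        subband_merge(LL, LH, HL, z),  # LL + LH + HL
        subband_merge(LL, LH, HL, HH), # full reconstruction
    ]
```

Each successive stage reuses the same synthesis step, only with fewer subbands forced to zero, which is what makes the reproduction hierarchical.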
The above-mentioned subband coding method is called the "wavelet transform", wherein an image signal is divided into two frequency subbands, i.e., low- and high-frequency subbands, and the low-frequency subband is further divided recursively into frequency subbands. The wavelet transform utilizes the property of image data that the low-frequency component thereof carries a large amount of information.
Hereinafter, a conventional wavelet transform apparatus will be described with reference to FIGS. 47 and 48 for a case where frequency division into more subbands is performed. FIG. 47 is a block diagram for explaining the frequency division in the wavelet transform. FIG. 48 shows an example of frequency division of an image signal after the wavelet transform.
As shown in FIG. 48, an image signal is divided into ten frequency subbands. In FIG. 47, reference numerals 131, 135, 139, 143, 147, 151, 155, 159, and 163 designate one-dimensional HPFs, numerals 132, 136, 140, 144, 148, 152, 156, 160, and 164 designate one-dimensional LPFs, numerals 133, 137, 141, 145, 149, 153, 157, 161, and 165 designate subsamplers for 2:1 subsampling of signals frequency-divided by the LPFs, and numerals 134, 138, 142, 146, 150, 154, 158, 162, and 166 designate subsamplers for 2:1 subsampling of signals frequency-divided by the HPFs.
A description is given of the operation. Initially, horizontal line data L1 of an input image I1 is frequency-divided by the HPF 131 and the LPF 132 and band-divided by the subsampler 133 and the subsampler 134 to produce a high-frequency component L1' and a low-frequency component L1". Thereafter, similar processing is performed for the entire input image I1 to divide the input image I1 into high-frequency band data I2 and low-frequency band data I3.
Next, vertical line data of the band data I2 is frequency-divided by the HPF 135 and the LPF 136 and band-divided by the subsampler 137 and the subsampler 138 to produce a high-frequency component HH (F1) and a low-frequency component HL (F2). Thereafter, similar processing is performed for the band data I3 to band-divide it into a high-frequency component LH (F3) and a low-frequency component LL (=I4).
Next, the band data LL (=I4) is subjected to band division in the horizontal and vertical directions, producing a high-frequency component LLHH (F4) and a low-frequency component LLHL (F5) from band data I5, and a high-frequency component LLLH (F6) and a low-frequency component LLLL (=I7) from band data I6.
Thereafter, similar processing is performed for the band data LLLL (=I7), producing a high-frequency component LLLLHH (F7) and a low-frequency component LLLLHL (F8) from band data I8, and a high-frequency component LLLLLH (F9) and a low-frequency component LLLLLL (=I10) from band data I9.
According to the successive band division processing mentioned above, the input image is transformed to data divided into ten frequency subbands as shown in FIG. 48, completing the wavelet transform.
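The successive band division described above can be sketched as follows; 2-tap Haar filters are assumed for illustration, and three recursion levels on the low-frequency band yield the ten subbands of FIG. 48.

```python
import numpy as np

def split_2d(a):
    """One quad split into LL, LH, HL, HH (2-tap Haar assumed)."""
    s = np.sqrt(2.0)
    lo = (a[:, 0::2] + a[:, 1::2]) / s   # horizontal low half
    hi = (a[:, 0::2] - a[:, 1::2]) / s   # horizontal high half
    LL = (lo[0::2, :] + lo[1::2, :]) / s
    LH = (lo[0::2, :] - lo[1::2, :]) / s
    HL = (hi[0::2, :] + hi[1::2, :]) / s
    HH = (hi[0::2, :] - hi[1::2, :]) / s
    return LL, LH, HL, HH

def wavelet_decompose(img, levels=3):
    """Recursively split the low-frequency band, as in FIG. 47.
    Each level adds 3 detail subbands, so `levels` stages give
    3 * levels + 1 subbands (3 levels -> 10, as in FIG. 48)."""
    ll = np.asarray(img, dtype=float)
    detail = []
    for _ in range(levels):
        ll, lh, hl, hh = split_2d(ll)
        detail.extend([hh, hl, lh])   # detail bands, in generation order
    return detail + [ll]              # final low-frequency band last
```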
In this wavelet transform apparatus, among the four subbands HH, HL, LH, and LL obtained by the apparatus shown in FIG. 46, the LL subband is further subjected to frequency division to obtain ten subbands.
The prior art coding methods mentioned above have the following drawbacks.
That is, the dynamic change of quantization factors used in MPEG requires complicated processing steps when real-time coding is performed. Further, when the amount of code per unit time is fixed, control of the code amount becomes more difficult as the unit time is reduced.
Further, progressive coding in JPEG requires that all blocks be transformed by DCT. When hierarchical progression is employed, back-up memories for storing low-resolution images and DSFs are required, and the number of DCT operations increases.
Generally, a color image is divided into a plurality of color components, and coding is performed for each color component. Therefore, in order to realize hierarchical reproduction, the plurality of color components must be coded at the same time, or the codes produced for each color component must be rearranged, so that a considerable amount of memory is required.
Furthermore, the wavelet transform apparatus employed in the prior art coding method and apparatus has the two problems mentioned hereinafter, because it is realized according to the basic system described above.
That is, in the process of storing the transformed data after the frequency-band division using the HPFs and LPFs, since the amounts of data I1 to I10 differ from each other, the sequence control is complicated when the entire apparatus is realized in hardware, and the scale of the hardware is increased.
Another problem resides in the processing speed required for converting the wavelet-transformed data F1 to F9 and I10 into an image signal as shown in FIG. 48. When real-time wavelet transform is performed, transforming the dispersed subband data F1 to F9 hampers the processing speed.
Furthermore, it is presumed from the problems mentioned above that the conventional inverse wavelet transform has similar problems.