1. Field of the Invention
The present invention relates to image compression and, more particularly, to a method and an apparatus for encoding and decoding image data that can enhance compression efficiency while causing little visible degradation of image quality.
2. Description of Related Art
Conventionally, an image has been encoded through the processes of predicting the image in terms of time and space, encoding an RGB signal of the predicted image, converting/quantizing the encoded RGB signal, and generating bit streams from coefficients of the converted/quantized image. When encoding an image, predictive encoding is performed for each color component, i.e., R (red), G (green), and B (blue): the RGB color components of an image are considered separately, predictively encoded, and compressed.
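The per-component predictive encoding described above can be illustrated with a minimal sketch. The function names and the simple left-neighbor predictor below are hypothetical stand-ins for whatever predictor a conventional codec actually uses; the point is only that each color plane is predicted and encoded independently of the others.

```python
# Hypothetical sketch of conventional per-component predictive encoding:
# each color plane (R, G, B) is predicted on its own, and only the
# prediction residual is passed on to the transform/quantization stage.

def predict_left(plane):
    """Replace each pixel by its difference from the left neighbor
    (a simple spatial predictor; the first pixel of a row predicts from 0)."""
    residual = []
    for row in plane:
        prev = 0  # assumed predictor value at the start of each row
        res_row = []
        for pixel in row:
            res_row.append(pixel - prev)
            prev = pixel
        residual.append(res_row)
    return residual

def encode_rgb_separately(r_plane, g_plane, b_plane):
    # The three components are treated as three independent images,
    # so any correlation between them goes unexploited.
    return (predict_left(r_plane),
            predict_left(g_plane),
            predict_left(b_plane))
```

Because the planes are handled in isolation, identical structure in R, G, and B produces three identical residual signals, which is exactly the redundancy the conventional scheme fails to remove.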
After being separately encoded, the encoded RGB color components, i.e., the RGB signal, are orthogonally transformed and quantized. Orthogonal transform and quantization is a high-efficiency encoding method for image signals or voice signals: an input signal is divided into blocks of an appropriate size, and each block is orthogonally transformed. Data is then compressed by reducing the total number of bits. To reduce the total number of bits, different numbers of bits are assigned to the R, G, and B components according to the power of the transformed R, G, and B signal components, and the R, G, and B components are then quantized.
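A minimal pure-Python sketch of block-based orthogonal transform and quantization is given below. The 4×4 Hadamard matrix is used here only because it is a simple orthogonal transform (H·Hᵀ = 4·I); a conventional codec would typically use a DCT, and the uniform quantization step is a hypothetical stand-in for the power-dependent bit allocation described above.

```python
# Sketch: orthogonal transform of a 4x4 block followed by quantization.
# H is a 4x4 Hadamard matrix, orthogonal up to a scale factor.

H = [[1,  1,  1,  1],
     [1, -1,  1, -1],
     [1,  1, -1, -1],
     [1, -1, -1,  1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(m):
    return [list(col) for col in zip(*m)]

def transform(block):
    # Forward transform Y = H X H^T: concentrates the block's power
    # into a few coefficients (the DC term for a flat block).
    return matmul(matmul(H, block), transpose(H))

def quantize(coeffs, step):
    # A coarser step spends fewer bits; in the conventional scheme the
    # step would depend on the power of the transformed component.
    return [[round(c / step) for c in row] for row in coeffs]

def inverse_transform(coeffs):
    # Inverse transform with the 1/16 normalization
    # (the 4x4 Hadamard is applied once on each side).
    y = matmul(matmul(H, coeffs), transpose(H))
    return [[v / 16 for v in row] for row in y]
```

For a flat 4×4 block all the signal power lands in the single DC coefficient, so after quantization only one nonzero value needs to be coded.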
According to the conventional art, image data input in line units is divided into two-dimensional blocks (for example, 4×4 blocks or 8×8 blocks), and the two-dimensional blocks are then encoded and decoded. Since the encoding and decoding of the image data can be performed only after the two-dimensional blocks are formed, real-time encoding and decoding is limited. For example, a 4×4 block can be formed only after the inputs of all four rows of the 4×4 block have been received. When encoding the 4×4 block, three rows are stored in a buffer, and the first row is encoded only when the fourth row is input; in this process, the encoding is delayed. Conversely, when decoding the 4×4 block, an output row can be displayed only after all rows of the 4×4 block have been decoded. Thus, the process of storing three rows in a buffer is again required, and a time delay is inevitable.
In addition, conventional spatial prediction of a current block is performed using pixel values of blocks adjacent to the current block, in particular blocks on the left of the current block. This makes real-time spatial prediction and encoding impossible. In other words, spatial prediction of the current block using the pixel values of the blocks on the left of the current block can be performed only after spatial prediction, conversion and quantization, inverse quantization and inverse conversion, and spatial prediction compensation have been performed on those adjacent blocks, i.e., only using the pixel values of the restored adjacent blocks. If the pixel values of the blocks on the left of the current block are used in this way, pipeline processing cannot be performed, thereby making it impossible to encode and decode image data in real time.
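The serial dependency described above can be sketched as follows. The names are hypothetical, and each "block" is reduced to a single value with quantization standing in for the whole transform/quantization stage; the essential point is that the fully reconstructed left neighbor must exist before the next block's prediction can even begin, which defeats pipelining.

```python
# Sketch of the serial dependency in left-neighbor spatial prediction:
# block i+1 cannot be predicted until block i has completed prediction,
# quantization, inverse quantization, and prediction compensation.

def encode_row_of_blocks(blocks, step=2):
    reconstructed_left = 0  # assumed predictor for the first block
    quantized_residuals = []
    for block in blocks:  # each "block" is a single value, for brevity
        residual = block - reconstructed_left             # spatial prediction
        quantized = round(residual / step)                # transform/quantization stand-in
        reconstructed = reconstructed_left + quantized * step  # inverse + compensation
        quantized_residuals.append(quantized)
        # The next iteration depends on this value, so the loop body
        # for block i must fully finish before block i+1 can start.
        reconstructed_left = reconstructed
    return quantized_residuals
```

Because each iteration consumes the previous iteration's reconstruction, the stages cannot be overlapped across blocks the way a pipelined encoder would require.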
Moreover, if the R, G, and B components are separately encoded, information that is redundant among the RGB components is encoded redundantly, resulting in a decrease in encoding efficiency.
For these reasons, the conventional encoding method reduces the compression efficiency of an image and deteriorates image quality.