1. Field of the Invention
The present invention relates to a data coding apparatus, a coding method, a decoding apparatus, and a decoding method for computer graphics.
2. Description of the Related Art
With recent advances in computer graphics techniques, the processing speed of graphics processing units (GPUs) has increased greatly. In general, data for graphics processing is stored in a storage medium, such as a memory or a hard disk, and is input from the storage medium to the GPU. Reading data from the storage medium and writing data to it take time.
Accordingly, even if the processing speed of the GPU is increased, the time required for reading and writing data remains a bottleneck in computer graphics processing.
To overcome this problem, various attempts have been made to increase the throughput of data access by compressing the data to be read or written. For instance, the data is compressed by Huffman coding (see, for example, Michael Deering, “Geometry Compression”, International Conference on Computer Graphics and Interactive Techniques, Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '95), pp. 13-20, 1995).
In entropy coding, of which Huffman coding is representative, however, the entropy of the data is used as information for generating the code data; therefore, the code data must be used together with the entropy information from which it was generated. For instance, Huffman coding uses the frequency of occurrence of data items for coding: the higher the frequency of a data item, the shorter the code assigned to it. Codes of high compression efficiency can thus be acquired, but a table (generally called a Huffman table), which indicates the correspondence between the appearance-frequency information used for coding and the resultant codes, must be referred to during decoding.
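As a minimal illustration of this property (a Python sketch for explanatory purposes only, not part of any claimed apparatus; the function name is an assumption), a Huffman table can be built so that higher-frequency symbols receive shorter codes, and the resulting symbol-to-code table is precisely the additional information the decoder must hold:

```python
import heapq
from collections import Counter


def huffman_table(data):
    """Build a Huffman code table mapping each symbol to a bit string.

    Higher-frequency symbols end up with shorter codes, which is the
    source of the compression efficiency described above.
    """
    freq = Counter(data)
    if len(freq) == 1:
        # Degenerate case: a single distinct symbol gets a one-bit code.
        return {sym: "0" for sym in freq}
    # Heap entries: (frequency, tie-breaker, {symbol: partial code}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, codes1 = heapq.heappop(heap)  # least frequent subtree
        f2, _, codes2 = heapq.heappop(heap)  # next least frequent
        # Merging prepends one bit, so deeper (rarer) symbols grow longer codes.
        merged = {s: "0" + c for s, c in codes1.items()}
        merged.update({s: "1" + c for s, c in codes2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]
```

For the input "aaaabbc", the frequent symbol "a" receives a shorter code than the rare symbols "b" and "c"; the decoder, however, cannot interpret a single bit of the output without this table.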
Namely, in the conventional coding method, the code data and the above-mentioned additional information (e.g., the Huffman table) must be referred to together, in one way or another, during decoding. It is here that the cost of referring to the additional information becomes apparent.
In general, the additional information is transferred to a decoder together with the data and is used to decode the code words generated by an encoder. Since, as mentioned above, the code words are generated using the entropy of the data, the content of the additional information varies from one data item to another. Therefore, the decoder must dynamically decode the code words based on the additional information corresponding to them. If much time is spent on this decoding, the performance of the decoder is inevitably degraded; that is, such decoding offsets the effect of the compression coding performed to increase throughput.
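The per-item cost of dynamic decoding can be sketched as follows (a hypothetical Python illustration; the function and variable names are assumptions, not taken from any conventional apparatus). Because the table differs per data item, the decoder must rebuild its lookup structure before consuming a single bit of each item:

```python
def decode(bits, table):
    """Decode a bit string using the {symbol: code} table shipped with the data.

    The inversion step below must be repeated for every data item, since
    each item carries its own table -- this is the dynamic-decoding cost
    described above.
    """
    inverse = {code: sym for sym, code in table.items()}
    out, buf = [], ""
    for bit in bits:
        buf += bit
        if buf in inverse:  # prefix codes: the first match is a whole symbol
            out.append(inverse[buf])
            buf = ""
    return "".join(out)
```

For example, with the table {"a": "1", "b": "01", "c": "00"}, the bit string "10100" decodes to "abc"; with a different item's table, the same bits would decode differently, which is why the table cannot be fixed in advance.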
It is therefore desirable to minimize the time required for decoding. In the conventional method, however, the code words must be decoded dynamically, which incurs a significant computation cost.
To overcome this problem, a method for statically decoding code words has also been contrived. In this method, all information necessary for decoding the code words is stored in the decoder in advance. This approach is taken in, for example, the coding scheme standardized for facsimile, namely, modified Huffman coding. This method, however, raises another problem: since the decoder prestores the code words, the encoder can perform coding only within the range of the stored code words. For example, when the decoder stores N code words (N being a natural number), the encoder must select from among them the codes that realize the highest compression efficiency.
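The encoder's restricted selection can be sketched as follows (a hypothetical Python illustration; the function names and the two sample tables are assumptions for explanation only). The encoder may not construct a table tailored to the data; it can only choose, among the tables the decoder already holds, the one that yields the shortest output:

```python
def encoded_length(data, table):
    """Total number of bits needed to encode `data` with a given
    {symbol: code} table."""
    return sum(len(table[sym]) for sym in data)


def pick_best_table(data, stored_tables):
    """Select, from the tables prestored in the decoder, the one that
    realizes the highest compression efficiency for this data.

    The encoder is restricted to these tables, so if none of them fits
    the statistics of the data, compression efficiency suffers.
    """
    return min(stored_tables, key=lambda t: encoded_length(data, t))
```

For data dominated by "a", a table giving "a" a one-bit code wins; for data dominated by "c", a table favoring "c" wins. Data with no particular tendency, such as computer graphics data, may match none of the stored tables well, which is the efficiency loss described below.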
Modified Huffman coding does not raise a significant problem for facsimile data, which has a particular statistical tendency. It may well, however, cause a significant reduction in compression efficiency for computer graphics data, which shows no such tendency. Further, when a decoder stores all the code words, it requires a large storage space, and it is not practical in terms of cost to provide a large memory that can be accessed at extremely high speed. A reduction in access speed is therefore inevitable, resulting in a great reduction in throughput.
As described above, the conventional methods contain various factors that reduce throughput or increase cost. In general, decoders are often realized by hardware dedicated to graphics processing, such as GPUs. The above-mentioned problems increase 1) the cost, in time and circuit structure, of preparing for decoding, and 2) the cost, in time and circuit structure, of the decoding itself. Accordingly, these problems directly lead to reduced throughput and increased hardware prices.
In recent years, GPUs have been installed not only in high-function machines but also in low-priced machines, such as pachinko machines, car navigation systems, mobile phones, portable game machines, and mobile terminals. The above-mentioned increases in cost are serious obstacles to realizing a GPU for such inexpensive machines.