I. Field
The present invention relates to digital signal processing. More specifically, the present invention relates to minimizing the length of an input address for variable length encoded data using a sub-optimal variable length code.
II. Description of the Related Art
In the field of transmission and reception of video signals such as projecting “films” or “movies”, various improvements are being made to image compression techniques. Many of the current and proposed video systems make use of digital encoding techniques. Digital encoding provides a robustness for the communications link which resists impairments such as multipath fading and jamming or signal interference, each of which could otherwise seriously degrade image quality. Furthermore, digital techniques facilitate the use of signal encryption techniques, which are found useful or even necessary for governmental and many newly developing commercial broadcast applications.
High definition video, such as that in digital cinema, is an area which benefits from improved image compression techniques. One compression technique capable of offering significant levels of compression while preserving the desired level of quality for video signals utilizes adaptively sized blocks and sub-blocks of encoded Discrete Cosine Transform (DCT) coefficient data. This technique will hereinafter be referred to as the Adaptive Block Size Discrete Cosine Transform (ABSDCT) method.
Lossless compression refers to compression methods for which the original uncompressed data set can be recovered exactly from the compressed stream. Given an input set of symbols, a modeler generates an estimate of the probability distribution of the input symbols. The probability model is then used to map symbols into codewords. A well-known encoding technique to perform lossless compression is Huffman encoding. In Huffman encoding, modeling and symbol-to-codeword mapping functions are combined into a single process. First, the symbols are ordered according to their probabilities. For example, if there are N distinct symbols s1, s2, . . . , sN with probabilities of occurrence p1, p2, . . . , pN, then the symbols are rearranged so that p1≧p2≧ . . . ≧pN. Generally, the frequency of occurrence of each symbol is known or estimated a priori. Then, a contraction process is applied to the two symbols with the lowest probabilities. For example, the two symbols sN-1 and sN are replaced by a hypothetical symbol HN-1 that has a probability of occurrence pN-1+pN. Thus, the new set of symbols has N-1 members: s1, s2, . . . , sN-2, HN-1. The process is repeated until the final set has only one member. This recursive process may be viewed as the construction of a binary tree, since at each step two symbols are merged.
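The contraction process described above may be sketched in Python as follows. This is an illustrative sketch only, not part of the claimed invention; the function name `huffman_code` and the choice of a heap to locate the two least probable symbols are the author's assumptions.

```python
import heapq

def huffman_code(probabilities):
    """Build a Huffman code by repeatedly merging the two least probable
    symbols, as in the contraction process described above.

    `probabilities` maps each symbol to its (estimated) probability of
    occurrence. Returns a dict mapping each symbol to its binary codeword.
    """
    # Each heap entry: (probability, tie-breaker, {symbol: partial codeword}).
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probabilities.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)  # lowest probability
        p2, _, c2 = heapq.heappop(heap)  # second lowest
        # Replace the two symbols with a hypothetical symbol of
        # probability p1 + p2, prefixing '0'/'1' to each branch.
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        count += 1
        heapq.heappush(heap, (p1 + p2, count, merged))
    return heap[0][2]

codes = huffman_code({"a": 0.5, "b": 0.25, "c": 0.15, "d": 0.10})
# More probable symbols receive shorter codewords; here "a" gets a 1-bit code.
```

Because each merge removes two members and adds one, N-1 merges reduce the set to a single member, which is the root of the binary tree.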
Huffman decoding is accomplished in a variety of ways, each of which has distinct disadvantages. Look-up table based decoding yields a constant decoding symbol rate. The look-up table is constructed at the decoder. For example, if the longest code word is L bits, then a 2^L-entry look-up table is needed. From the compressed input bit stream, L bits are read into a buffer. The L-bit word in the buffer is used as an address into the look-up table to obtain the corresponding symbol, say sK. Let the codeword length be lK. The first lK bits are discarded from the buffer and the next lK bits are input, so that the buffer again has L bits. This is repeated until all of the symbols have been decoded.
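The table-based decoder described above may be sketched as follows. Again this is an illustrative sketch, not the claimed apparatus; the function names, the string representation of the bit stream, and the example code table are assumptions for exposition.

```python
def build_decode_table(codes):
    """Build the 2^L-entry look-up table described above.

    `codes` maps symbol -> codeword string. Every L-bit address whose
    leading bits match a codeword stores (symbol, codeword length).
    """
    L = max(len(w) for w in codes.values())
    table = [None] * (1 << L)
    for symbol, word in codes.items():
        pad = L - len(word)
        base = int(word, 2) << pad
        for tail in range(1 << pad):  # all L-bit completions of the codeword
            table[base + tail] = (symbol, len(word))
    return table, L

def decode(bits, codes, n_symbols):
    """Decode `n_symbols` symbols from the bit string `bits`.

    The decoder keeps an L-bit buffer, uses it as a table address, emits
    the matched symbol sK, then discards its lK bits and refills the buffer.
    """
    table, L = build_decode_table(codes)
    bits = bits + "0" * L          # padding so the final buffer is full
    pos, out = 0, []
    for _ in range(n_symbols):
        address = int(bits[pos:pos + L], 2)  # L-bit buffer as table address
        symbol, length = table[address]
        out.append(symbol)
        pos += length                        # discard lK bits, refill
    return out

codes = {"a": "0", "b": "10", "c": "110", "d": "111"}
decode("010110111", codes, 4)  # -> ['a', 'b', 'c', 'd']
```

Note that each decoded symbol costs one table access regardless of its codeword length, which is the source of the constant symbol rate, while the table size grows exponentially in the longest codeword length L.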
Although the Huffman coding technique described above performs remarkably well, compact hardware implementation of the technique may be difficult. An alternative technique that would make hardware implementation more efficient is desired. Further, because the number of bits of data is large, decoding may not occur in a single clock cycle. An apparatus and method that allows for compact hardware implementation and code look-ups to occur in one clock cycle is provided by the invention in the manner described below.