The development of digital encoding and decoding processes for audio and video data continues to have a significant effect on the delivery of entertainment content. Despite the increased capacity of memory devices and the wide availability of high-bandwidth data delivery, there is continued pressure to minimize the amount of data to be stored and/or transmitted. Audio and video data are often delivered together, and the bandwidth available for the audio data is frequently constrained by the requirements of the video portion.
Accordingly, audio data are often encoded at high compression factors, sometimes 30:1 or higher. Because signal distortion increases with the amount of compression applied, trade-offs may be made between the fidelity of the decoded audio data and the efficiency of storing and/or transmitting the encoded data.
Moreover, it is desirable to reduce the complexity of the encoding and decoding algorithms. Encoding additional data describing the encoding process can simplify decoding, but at the cost of storing and/or transmitting that additional data. For this reason, in parametric backward-adaptive methods, the bit allocation data for each mantissa are not encoded; instead, the decoder re-computes the bit allocation from other encoded information. Such methods allow less data to be encoded, but place relatively greater computational complexity on the decoder side. Similarly, while lossy mantissa encoding processes allow significant data compression, some information about the original mantissa values is lost in the encoding process, particularly during mantissa quantization. Although existing audio encoding and decoding methods are generally satisfactory, improved methods would be desirable.
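The two ideas above — re-deriving bit allocation in the decoder rather than transmitting it, and the irreversibility of mantissa quantization — can be illustrated with a minimal sketch. The allocation rule below (more bits for coefficients with smaller exponents) is a hypothetical stand-in, not the allocation procedure of any particular codec; real systems use psychoacoustically derived rules.

```python
def allocate_bits(exponents, max_bits=6):
    """Derive per-coefficient mantissa bit counts from the exponents.

    Because the same deterministic rule runs in both the encoder and
    the decoder, the bit allocation itself need not be transmitted;
    this is the essence of backward-adaptive allocation.
    Hypothetical rule: a smaller exponent implies a larger coefficient
    and therefore receives more mantissa bits.
    """
    return [max(1, max_bits - e) for e in exponents]


def quantize(mantissa, bits):
    """Uniformly quantize a mantissa in [0, 1) to the given bit depth."""
    levels = 1 << bits
    return min(int(mantissa * levels), levels - 1)


def dequantize(code, bits):
    """Reconstruct an approximate mantissa (midpoint of the bin)."""
    levels = 1 << bits
    return (code + 0.5) / levels


# Encoder side: only exponents and quantized mantissa codes are emitted.
exponents = [0, 2, 4, 5]
mantissas = [0.71, 0.33, 0.55, 0.12]
alloc = allocate_bits(exponents)
codes = [quantize(m, b) for m, b in zip(mantissas, alloc)]

# Decoder side: re-derive the identical allocation from the exponents,
# then dequantize. The residual error is the lossy part of the process.
decoded = [dequantize(c, b) for c, b in zip(codes, allocate_bits(exponents))]
```

Note that the reconstruction error of each mantissa is bounded by half a quantization bin, so coefficients granted fewer bits by the shared allocation rule suffer proportionally more distortion — the trade-off described above.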