In order to broadcast or record audio signals more efficiently, it may be advantageous to reduce the amount of information required to represent them. Typically, this information is stored as pulse code modulation (PCM) samples. In the case of digital audio signals stored as PCM samples, the amount of digital information needed to accurately reproduce the original samples may be reduced by applying a digital compression algorithm, resulting in a digitally compressed representation of the original signal. (The term "compression" used in this context means the compression of the amount of digital information which must be stored or recorded.)
The goal of the digital compression algorithm is to produce a digital representation of an audio signal which, when decoded and reproduced, sounds the same as the original signal, while using a minimum of digital information (bit-rate) for the compressed (or encoded) representation. The ATSC digital television standard and the digital video disk (DVD) video standard call for audio compression using AC-3, which was developed by DOLBY LABORATORIES. AC-3, a standard digital compression technique, can encode from 1 to 5.1 channels of source audio from a PCM representation into a serial bit stream at data rates ranging from 32 kbps to 640 kbps. The designation 5.1 refers to five discrete full-bandwidth channels plus the 0.1 channel, a fractional-bandwidth channel intended to convey only low frequency (subwoofer) signals.
A typical application of this algorithm is shown in FIG. 1. In this example, a 5.1 channel audio program is converted from a PCM representation requiring more than 5 Mbps (6 channels × 48 kHz × 18 bits = 5.184 Mbps) into a 384 kbps serial bit stream by the AC-3 encoder 12. Transmission equipment 14 converts this bit stream to a radio frequency (RF) transmission which is directed to a transponder 16. The amount of bandwidth and power required by the transmission has been reduced by more than a factor of 13 by the AC-3 digital compression. The signal received from the satellite 15 is demodulated back into the 384 kbps serial bit stream by reception equipment 17, and decoded by the AC-3 decoder 18. The result is the original 5.1 channel audio program.
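The bit-rate arithmetic in this example can be checked directly. The following is a minimal sketch; the 384 kbps figure is the example's chosen encoded rate, not a fixed property of AC-3:

```python
# Values taken from the example above.
channels = 6          # 5 full-bandwidth channels + 1 LFE ("0.1") channel
sample_rate = 48_000  # samples per second
bits_per_sample = 18

pcm_rate = channels * sample_rate * bits_per_sample  # bits per second
print(pcm_rate)       # 5_184_000 bps, i.e. 5.184 Mbps

encoded_rate = 384_000                 # example AC-3 encoded rate
reduction = pcm_rate / encoded_rate    # bandwidth reduction factor
print(reduction)      # 13.5, i.e. more than a factor of 13
```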
Digital compression of audio is useful wherever there is an economic benefit to be obtained by reducing the amount of digital information required to represent the audio. Typical applications are in satellite or terrestrial audio broadcasting, delivery of audio over metallic or optical cables, or storage of audio on magnetic, optical, semiconductor, or other storage media.
Referring to FIG. 2, the important features of the decoder are shown in block 20. The encoded bit stream is checked for errors and is deformatted to provide various types of data, such as the encoded spectral envelope and the quantized mantissas. The bit allocation routine 22 is run and the results are used to unpack and dequantize the mantissas. The spectral envelope is decoded via block 26 to produce the exponents. The exponents and mantissas are transformed back into the time domain via synthesis filter block 28 to produce the decoded PCM time samples.
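The exponent/mantissa representation described above can be illustrated with a toy sketch: each frequency-domain coefficient is rebuilt by scaling the dequantized mantissa by two raised to the negative exponent. The function name and input values here are illustrative only, not actual A/52 routines:

```python
# Toy reconstruction step (illustrative, not literal A/52 code): an
# exponent e and mantissa m together represent the coefficient m * 2**-e.
def reconstruct_coefficients(exponents, mantissas):
    return [m * 2.0 ** -e for e, m in zip(exponents, mantissas)]

# Hypothetical decoded exponents and dequantized mantissas:
coeffs = reconstruct_coefficients([2, 4, 6], [0.5, -0.25, 0.75])
print(coeffs)  # [0.125, -0.015625, 0.01171875]
```

The reconstructed coefficients would then be passed to the synthesis filter (inverse transform) to produce PCM time samples.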
Prior to transforming the audio signal from the time domain to the frequency domain, the encoder performs an analysis of the spectral and/or temporal nature of the input signal and selects the appropriate block length. This analysis occurs in the encoder only, and therefore can be upgraded and improved without altering the existing base of decoders. In this embodiment, a one-bit code per channel per transform block is embedded in the bit stream which conveys the block length information. The decoder uses this information to deformat the bit stream, reconstruct the mantissa data, and apply the appropriate inverse transform equations. These inverse transform equations are computationally expensive and represent a significant portion of the operations performed on the bit stream.
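The per-channel flag can be illustrated as follows. This is a sketch under assumed long and short block lengths of 512 and 256 samples, with a set flag selecting the short transform; it is not the literal A/52 bit stream syntax:

```python
# Assumed block lengths for illustration.
LONG_BLOCK = 512   # single long transform per block
SHORT_BLOCK = 256  # shorter transforms for transient signals

def block_lengths(block_switch_bits):
    """Map each channel's one-bit block-switch flag to a transform length.
    A set bit selects the short transform in this sketch."""
    return [SHORT_BLOCK if bit else LONG_BLOCK for bit in block_switch_bits]

# Hypothetical flags for a 5.1 (six-channel) transform block:
print(block_lengths([0, 0, 1, 0, 0, 1]))  # [512, 512, 256, 512, 512, 256]
```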
A specification for the AC-3 algorithm, referred to as the ATSC specification A/52, is a published technical description of the AC-3 algorithm. The method for performing the inverse transform or inverse discrete cosine transform (IDCT) in the AC-3 algorithm is designed to work efficiently in hardware such as DSP devices, but is extremely inefficient for software decoder implementations. By strictly following the above-identified specification, the software implementation can require as many as 7,400,000 processor instructions per inverse transform. Executing this many instructions imposes significant software overhead. Accordingly, a method and system for significantly reducing the number of instructions required to perform the inverse transform is desired.
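To see why a direct evaluation of the inverse transform is expensive, consider a generic naive inverse cosine transform. This is a sketch, not the actual A/52 transform equations: each output sample sums over all N coefficients, so one transform costs on the order of N squared multiply-accumulates (65,536 for N = 256), before any of the specification's additional pre- and post-processing steps:

```python
import math

def naive_inverse_transform(coeffs):
    """Direct (unoptimized) inverse cosine transform: every output sample
    accumulates a contribution from every coefficient, so the cost grows
    as N*N rather than the N*log(N) of a fast-transform formulation."""
    n = len(coeffs)
    out = []
    for t in range(n):                       # N output samples...
        acc = 0.0
        for k in range(n):                   # ...each summing N terms
            acc += coeffs[k] * math.cos(math.pi * k * (2 * t + 1) / (2 * n))
        out.append(acc)
    return out

# A DC-only input reconstructs to a constant output:
print(naive_inverse_transform([1.0, 0.0, 0.0, 0.0]))  # [1.0, 1.0, 1.0, 1.0]
```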
Accordingly, what is needed is a method and system for decoding compressed audio signals in a software implementation. More particularly, what is needed is a system and method for reducing the number of software instructions required for providing an inverse transform for bit stream decoding. The present invention addresses such a need.