1. Field
One or more embodiments relate to a method of encoding and decoding an audio signal, and more particularly, to a lossless encoding and decoding method.
2. Description of the Related Art
Encoding and decoding of an audio signal may generally be performed in a frequency domain, a representative example being Advanced Audio Coding (AAC). An AAC codec may perform a Modified Discrete Cosine Transform (MDCT) to transform a signal into the frequency domain, and may quantize the frequency spectrum using a masking degree of the signal in view of psychoacoustics. A lossless compression scheme may be adopted in order to further compress a result of the quantization; Huffman coding may be used in the AAC. As another lossless compression scheme, a Bit-Sliced Arithmetic Coding (BSAC) codec, in which arithmetic coding is applied instead of the Huffman coding, may be used.
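The quantize-then-entropy-code step described above can be sketched as follows. This is a minimal illustration, not the AAC bitstream format: the uniform quantizer, the step size, and the toy spectrum are assumptions made for clarity, and the Huffman table is built from the observed index frequencies.

```python
# Minimal sketch (not the AAC format): quantize a toy spectrum, then
# build a Huffman code over the resulting quantization indices.
import heapq
from collections import Counter

def quantize(spectrum, step=0.5):
    """Uniform scalar quantization standing in for AAC's
    psychoacoustically weighted quantizer (an assumption here)."""
    return [round(x / step) for x in spectrum]

def huffman_code(symbols):
    """Return a prefix-free code {symbol: bitstring} for the
    observed symbol frequencies."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate case: only one distinct symbol
        return {next(iter(freq)): "0"}
    # Heap entries: (count, tiebreaker, partial code table).
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        n0, _, c0 = heapq.heappop(heap)
        n1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c0.items()}
        merged.update({s: "1" + b for s, b in c1.items()})
        heapq.heappush(heap, (n0 + n1, counter, merged))
        counter += 1
    return heap[0][2]

spectrum = [0.9, 0.1, 0.12, -0.05, 0.11, 0.08, -0.4, 0.09]
indices = quantize(spectrum)
code = huffman_code(indices)
bits = "".join(code[i] for i in indices)
```

Frequent indices receive short codewords, so the bitstream is shorter than a fixed-length encoding of the same indices; a BSAC-style codec would replace the Huffman table with arithmetic coding.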
Encoding and decoding of a speech signal may generally be performed in a time domain. A majority of speech codecs compressing in the time domain may be based on code excited linear prediction (CELP). The CELP is a speech encoding technology, and extensively used codecs such as G.729, the Adaptive Multi-Rate WideBand (AMR-WB) codec, the internet Low Bitrate Codec (iLBC), the Enhanced Variable Rate Codec (EVRC), and the like may be CELP-based speech encoders. These coding schemes may be developed under an assumption that the speech signal is obtained using a linear prediction. To encode speech, a linear prediction coefficient and an excitation signal may be needed. In general, the linear prediction coefficient may be encoded using line spectral pairs (LSP), and the excitation signal may be encoded using several codebooks. As examples of encoding schemes developed based on the CELP, an algebraic code excited linear prediction (ACELP) encoding scheme, a conjugate structure code excited linear prediction (CS-CELP) encoding scheme, and the like may be given.
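The linear-prediction idea underlying CELP can be sketched as follows: fit predictor coefficients to a frame and compute the excitation (prediction residual), which carries much less energy than the frame itself. The Levinson-Durbin recursion is a standard way to solve for the coefficients; the frame (a decaying sinusoid standing in for voiced speech) and the prediction order are assumptions made for illustration.

```python
# Minimal sketch of linear prediction as used in CELP-style coders:
# fit order-p coefficients with Levinson-Durbin, then compute the
# excitation signal as the prediction residual.
import math

def autocorr(x, lag):
    return sum(x[n] * x[n - lag] for n in range(lag, len(x)))

def levinson_durbin(x, order):
    """Return LPC coefficients a[1..order] minimizing the
    short-term prediction error."""
    r = [autocorr(x, k) for k in range(order + 1)]
    a = [0.0] * (order + 1)
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] - sum(a[j] * r[i - j] for j in range(1, i))
        k = acc / err
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):
            new_a[j] = a[j] - k * a[i - j]
        a = new_a
        err *= (1 - k * k)
    return a[1:]

def residual(x, coeffs):
    """Excitation signal: each sample minus its linear prediction."""
    p = len(coeffs)
    return [x[n] - sum(coeffs[j] * x[n - 1 - j] for j in range(p))
            for n in range(p, len(x))]

# A decaying sinusoid as a stand-in for one voiced speech frame.
frame = [math.sin(0.3 * n) * 0.95 ** n for n in range(160)]
lpc = levinson_durbin(frame, order=2)
exc = residual(frame, lpc)
```

A CELP encoder would then quantize the coefficients (via LSPs) and approximate the low-energy excitation with codebook entries, rather than transmitting it directly.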
Due to a difference in sensitivity between a low frequency band and a high frequency band, in view of restrictions on a data rate and of psychoacoustics, the low frequency band may be sensitive to a fine structure of voice/music frequencies, and the high frequency band may be less sensitive to the fine structure. Thus, a greater number of bits may be applied to the low frequency band to accurately encode the fine structure, and a smaller number of bits may be applied to the high frequency band. In this instance, the low frequency band may adopt an encoding scheme using the AAC codec, and the high frequency band may adopt an encoding scheme using energy information and adjustment information, which is referred to as a Spectral Band Replication (SBR) technology. The SBR may copy a low frequency signal in a Quadrature Mirror Filter (QMF) domain to generate a high frequency signal.
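The band-replication principle can be sketched as follows. This is an assumed toy model, not the QMF-domain SBR algorithm: a single band of samples and a single transmitted energy value stand in for the per-subband envelope and adjustment data an SBR encoder would send.

```python
# Minimal sketch (assumed toy model, not the real SBR tool): the
# decoder copies low-band samples into the missing high band and
# rescales them to match the transmitted high-band energy.
import math

def band_energy(samples):
    return sum(s * s for s in samples)

def replicate_band(low_band, target_energy):
    """Copy the low band and rescale it so the replicated band
    carries the energy transmitted for the missing high band."""
    src = band_energy(low_band)
    gain = math.sqrt(target_energy / src) if src > 0 else 0.0
    return [gain * s for s in low_band]

low = [0.5, -0.3, 0.8, 0.1]          # toy low-band samples
high = replicate_band(low, target_energy=0.2)
```

Only the scalar energy (plus adjustment information in a real codec) needs to be transmitted for the high band, which is why SBR saves bits relative to coding the high-band fine structure directly.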
A scheme of reducing a number of used bits may also be applicable to a stereo signal. More specifically, a parameter indicating stereo information may be extracted after transforming the stereo signal into a mono signal, data obtained by compressing the stereo parameter and the mono signal may be transmitted, and the stereo signal may be decoded using the transmitted parameter in a decoder. As a scheme of compressing the stereo information, a Parametric Stereo (PS) technology may be used, and as a scheme of extracting and transmitting a parameter of a multi-channel signal as well as the stereo signal, a Moving Picture Experts Group (MPEG) Surround technology may be used.
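The downmix-plus-parameter scheme can be sketched as follows. This is an assumed simplification of parametric stereo: only one inter-channel level difference (ILD) per frame is kept, whereas a real PS encoder extracts several parameters per frequency band.

```python
# Minimal sketch (assumed simplification of Parametric Stereo):
# downmix a stereo frame to mono, transmit one level-difference
# parameter, and re-pan the mono signal at the decoder.
import math

def encode_ps(left, right):
    """Return (mono, ild): mono downmix plus the inter-channel
    level difference in dB."""
    mono = [(l + r) / 2 for l, r in zip(left, right)]
    e_l = sum(x * x for x in left)
    e_r = sum(x * x for x in right)
    ild = 10 * math.log10(e_l / e_r)
    return mono, ild

def decode_ps(mono, ild):
    """Re-pan the mono signal according to the transmitted ILD."""
    ratio = 10 ** (ild / 20)          # amplitude ratio left/right
    g_l = 2 * ratio / (1 + ratio)
    g_r = 2 / (1 + ratio)
    return [g_l * m for m in mono], [g_r * m for m in mono]

left = [0.8, 0.6, -0.4, 0.2]
right = [0.4, 0.3, -0.2, 0.1]         # right = scaled copy of left
mono, ild = encode_ps(left, right)
dec_l, dec_r = decode_ps(mono, ild)
```

Only the mono signal and the compact parameter are transmitted; in this toy case the channels differ only in level, so the reconstruction is essentially exact.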
Also, considering the above-described lossless coding in more detail, the lossless coding may be performed by treating each quantization index of a quantized spectrum as one symbol. Alternatively, the lossless coding may be performed by mapping indices of the quantized spectrum onto bit planes so that bits are bundled.
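The bit-plane mapping can be sketched as follows: the magnitudes of the quantization indices are sliced into planes from the most significant bit downward, so that each plane is a bundle of bits that can be entropy-coded together. The sign handling and the toy index values are assumptions for illustration.

```python
# Minimal sketch of bit-plane mapping of quantization indices:
# slice magnitudes into bit planes (MSB plane first) so each plane
# can be entropy-coded as one bundle of bits.
def to_bit_planes(indices):
    """Return (signs, planes); planes[0] is the MSB plane."""
    mags = [abs(i) for i in indices]
    signs = [0 if i >= 0 else 1 for i in indices]
    n_planes = max(mags).bit_length() if any(mags) else 0
    planes = [[(m >> p) & 1 for m in mags]
              for p in range(n_planes - 1, -1, -1)]
    return signs, planes

def from_bit_planes(signs, planes):
    """Inverse mapping used by the decoder."""
    mags = [0] * len(signs)
    for plane in planes:
        mags = [(m << 1) | b for m, b in zip(mags, plane)]
    return [-m if s else m for m, s in zip(mags, signs)]

indices = [5, -3, 0, 2]
signs, planes = to_bit_planes(indices)
# planes == [[1, 0, 0, 0], [0, 1, 0, 1], [1, 1, 0, 0]]
```

Coding the MSB plane first also makes the bitstream scalable, since truncating the lower planes still yields a coarse reconstruction of every index.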
In a case of performing a context-based lossless coding, the lossless coding may be performed using information about a previous frame.
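The benefit of such context modeling can be sketched as follows. This is an assumed toy model: each spectral index is coded with an adaptive frequency table selected by the co-located index of the previous frame, and the bit cost is estimated as the ideal code length, rather than by running a real arithmetic coder.

```python
# Minimal sketch (assumed toy model) of context-based lossless coding:
# the probability model for each index is chosen by the co-located
# index of the previous frame, so similar consecutive frames cost
# fewer bits than with a single context-free model.
import math
from collections import Counter, defaultdict

def model_cost(frames, use_context):
    """Ideal bit cost of coding `frames` with adaptive frequency
    tables; with use_context=True the table is selected by the
    previous frame's co-located symbol."""
    tables = defaultdict(Counter)
    bits = 0.0
    prev = [0] * len(frames[0])
    for frame in frames:
        for ctx, sym in zip(prev, frame):
            counts = tables[ctx if use_context else None]
            total = sum(counts.values())
            p = (counts[sym] + 1) / (total + 4)  # 4 possible symbols
            bits += -math.log2(p)
            counts[sym] += 1
        prev = frame
    return bits

# Frames whose indices tend to repeat their previous-frame values.
frames = [[0, 1, 2, 3]] * 4
with_ctx = model_cost(frames, use_context=True)
without_ctx = model_cost(frames, use_context=False)
```

Because each context table quickly concentrates probability on the symbol that tends to follow it, the context-based cost is lower than the context-free cost on such correlated frames.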