In the conventional multichannel audio signal coding art, many studies have examined coding that exploits the correlation between stereo signals to compress the amount of information. In the case of coding five-channel signals, which may not be audio signals, one known method is to group the channel signals in pairs, like stereo signals, thereby reducing the problem to coding of stereo signals. Compressive coding based on a difference signal, or a fixed-weighted difference signal, between channels is also often used, exploiting the similarity of signals between channels of the original sounds. However, such compressive coding techniques often provide low compression efficiency. Examples of these techniques are disclosed in Non-patent literature 1 and Non-patent literature 2.
A conventional predictive 1-channel coding and decoding method will be described with reference to FIG. 1. As shown in FIG. 1A, at the coding end, a time-series digital signal provided through an input terminal 11 is divided by a frame divider 12 into short-time periods (called frames), each consisting of a predetermined number of samples, for example 1,024 samples. The digital signal is analyzed by linear prediction, frame by frame, to calculate prediction coefficients in a linear predictive analyzing section 13. The prediction coefficients are typically quantized by a quantizer 13a in the linear predictive analyzing section 13.
A linear predicting section 14 uses the quantized prediction coefficients and the digital signal in the frame as inputs to perform linear prediction on the digital signal in the time direction, obtaining a predicted value of each sample. The linear prediction is autoregressive forward prediction. A subtractor 15 subtracts the predicted value from the corresponding sample of the input digital signal to generate a prediction error signal. The linear predicting section 14 and the subtractor 15 constitute a prediction error generating section 16.
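The analysis step above can be sketched as follows. This is a minimal illustration, not the patented implementation: the function name, the use of NumPy, and the zero-history assumption at the frame start are all assumptions made for the example.

```python
# Sketch of frame-wise autoregressive forward prediction and residual
# generation (linear predicting section 14 plus subtractor 15).
# Samples before the frame start are assumed to be zero for simplicity.
import numpy as np

def prediction_error(frame, coeffs):
    """Compute the forward linear-prediction residual of one frame.

    frame  : 1-D array of samples x(0), ..., x(K-1)
    coeffs : prediction coefficients a(1), ..., a(P); the predicted value
             is xhat(k) = sum_i a(i) * x(k - i), with x(k) = 0 for k < 0.
    """
    K, P = len(frame), len(coeffs)
    residual = np.empty(K)
    for k in range(K):
        pred = sum(coeffs[i] * frame[k - 1 - i]
                   for i in range(P) if k - 1 - i >= 0)
        residual[k] = frame[k] - pred  # subtractor: e(k) = x(k) - xhat(k)
    return residual
```

For a signal that the coefficients model exactly, the residual is small or zero after the first few samples, which is what makes it cheaper to entropy-code than the original signal.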
The prediction error signal from the prediction error generating section 16 is entropy-coded using Huffman coding or arithmetic coding in a compressive coding section 17 and the result is outputted as an error code. The quantized prediction coefficients from the linear predictive analyzing section 13 are coded using entropy coding or vector quantization in a coefficient coding section 18 and the result is outputted as a coefficient code. The prediction coefficients may be scalar-quantized and outputted.
At the decoding end, as shown in FIG. 1B, an inputted error code is decoded in an expansion-decoding section 21 by using a decoding scheme corresponding to the coding scheme used by the compressive coding section 17, to generate a prediction error signal. An inputted coefficient code is decoded in a coefficient decoding section 22 using a decoding scheme corresponding to the coding scheme used by the coefficient coding section 18, to generate prediction coefficients. The decoded prediction error signal and prediction coefficients are inputted into a predictive synthesizing section 23, where they are predictive-synthesized to reproduce a digital signal. A frame combiner 24 sequentially combines frames of the digital signal and outputs them through an output terminal 25. In the predictive synthesizing section 23, the digital signal to be reproduced and the decoded prediction coefficients are inputted into a regressive linear prediction section 26, where a prediction value is generated, and the prediction value and the decoded prediction error signal are added together in an adder 27 to reproduce the digital signal.
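The synthesis step at the decoder simply inverts the analysis: the adder reconstructs each sample from the residual and a prediction made from already-reconstructed samples. A minimal sketch, with the same illustrative names and zero-history assumption as the analysis example above:

```python
# Sketch of predictive synthesis (regressive linear prediction section 26
# plus adder 27): x(k) = e(k) + sum_i a(i) * x(k - i), x(k) = 0 for k < 0.
import numpy as np

def predictive_synthesis(residual, coeffs):
    """Reconstruct a frame from its prediction residual and coefficients."""
    K, P = len(residual), len(coeffs)
    x = np.zeros(K)
    for k in range(K):
        pred = sum(coeffs[i] * x[k - 1 - i]
                   for i in range(P) if k - 1 - i >= 0)
        x[k] = residual[k] + pred  # adder: reverse of the encoder subtractor
    return x
```

Because the decoder predicts from the same reconstructed samples the encoder used, analysis followed by synthesis is lossless when the residual and coefficients are transmitted exactly.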
A conventional method for coding a pair of stereo signals will be described with reference to FIG. 2, in which multichannel coding is reduced to coding of pairs of stereo signals. A first-channel digital signal xL(k) and a second-channel digital signal xR(k) in one frame are inputted into predictive coding sections 31L and 31R through input terminals 11L and 11R, respectively. A difference circuit 32 calculates the difference d(k)=xL(k)−xR(k) between the two signals. The difference signal d(k) is inputted into a predictive coding section 31D.
The predictive coding sections 31L, 31R, and 31D have the same configuration as that of the 1-channel predictive coding apparatus, for example as shown in FIG. 1A. Codes CSL, CSR, and CSD from the predictive coding sections 31L, 31R, and 31D are inputted into a code length comparator 33. The code length comparator 33 selects, from among the three possible pairs of these codes, the pair with the minimum total code amount, and outputs it as the codes for the first and second digital signals xL(k) and xR(k); any two of the three codes suffice to reconstruct both channels, since d(k)=xL(k)−xR(k). Using the correlation between channels of the digital signals in this way can reduce the code amount.
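The pair selection can be sketched as below. The function name and the use of code lengths in bits are illustrative assumptions; the comparator only needs the three lengths, not the codes themselves.

```python
def select_pair(len_L, len_R, len_D):
    """Pick the decodable pair of codes with minimum total code amount.

    Any two of {CSL, CSR, CSD} allow reconstruction of both channels,
    because d(k) = xL(k) - xR(k).
    """
    pairs = {('CSL', 'CSR'): len_L + len_R,
             ('CSL', 'CSD'): len_L + len_D,
             ('CSR', 'CSD'): len_R + len_D}
    return min(pairs, key=pairs.get)  # pair with the smallest total length
```

For strongly correlated channels the difference code CSD is typically short, so one of the pairs containing it usually wins; for uncorrelated channels the comparator falls back to (CSL, CSR).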
A technique has been proposed that uses the correlation between two channel signals and generates and codes a weighted difference between the channel signals, thereby improving the efficiency of compression. An example of this technique is shown in FIG. 3. Prediction error generators 34L and 34R generate linear prediction error signals eL(k) and eR(k) from digital signals xL(k) and xR(k). The linear prediction error signals eL(k) and eR(k) are inputted into entropy coders 35L and 35R and also inputted into a weighted difference generator 36. While the linear prediction coefficients are also coded separately, as in the example shown in FIG. 1A, only those parts related to the linear prediction errors are shown in FIG. 3. Supposing that a linear prediction error signal vector ER=(eR(0), eR(1), . . . , eR(K−1)) is a reference signal for a linear prediction error signal vector EL=(eL(0), eL(1), . . . , eL(K−1)), a weight calculating section 36a of the weighted difference generator 36 calculates a weighting factor β such that the energy Ed=∥EL−βER∥² of the weighted difference signal (vector) D=(d(0), d(1), . . . , d(K−1)) is minimized. Here, K denotes the number of samples of each signal in one frame, and β can be calculated as

β = ER^T EL / ER^T ER

where ER^T EL is the inner product, which can be calculated, together with ER^T ER, according to the following equations:

ER^T EL = Σ_{k=0}^{K−1} eR(k)eL(k)
ER^T ER = Σ_{k=0}^{K−1} eR(k)²
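The least-squares weight above can be computed directly from the two residual vectors. A minimal sketch, with an illustrative function name; the guard against a zero-energy reference frame is an added assumption, not part of the original description.

```python
# Sketch of the weight calculating section 36a:
# beta = (ER^T EL) / (ER^T ER) minimizes ||EL - beta * ER||^2 over the frame.
import numpy as np

def optimal_weight(eL, eR):
    """Return the weighting factor beta for reference residual eR."""
    eL, eR = np.asarray(eL, dtype=float), np.asarray(eR, dtype=float)
    denom = np.dot(eR, eR)           # ER^T ER = sum of eR(k)^2
    if denom == 0.0:
        return 0.0                   # assumed fallback for a silent frame
    return np.dot(eR, eL) / denom    # ER^T EL / ER^T ER
```

Since the energy is a quadratic function of β, this closed form is the exact minimizer; perturbing β in either direction can only increase the difference-signal energy.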
The weighting factor calculated in the weight calculating section 36a is quantized in a factor quantizer 36d and the resulting weighting factor code q is outputted to a code length comparator 37. The quantized weighting factor is inverse-quantized in a factor inverse quantizer 36e, and the linear prediction error signal eR(k) is multiplied by the resulting weighting factor β(q) at a multiplier 36b. The product is subtracted from the linear prediction error signal eL(k) in a subtractor 36c to generate a weighted difference signal d(k). The weighted difference signal d(k) is inputted into an entropy coder 35D. Codes CSL and CSD from the entropy coders 35L and 35D are inputted into the code length comparator 37, and the one of the codes that has the smaller code amount is outputted. The output from the code length comparator 37 and the output from the entropy coder 35R are the coded outputs of the digital signals xL(k) and xR(k). The code length comparator 37 also codes the weighting factor β and adds it to the outputs. In this way, the signals can be compressed more efficiently than by the coding shown in FIG. 2.
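The quantize/inverse-quantize/subtract path can be sketched as follows. The uniform quantizer with step size 1/32 is purely an illustrative assumption (FIG. 3 does not specify the quantizer); what matters is that the encoder subtracts using the *inverse-quantized* β(q), so the decoder can form exactly the same weighted reference from the code q.

```python
# Sketch of factor quantizer 36d, factor inverse quantizer 36e,
# multiplier 36b, and subtractor 36c: d(k) = eL(k) - beta_q * eR(k).
import numpy as np

def weighted_difference(eL, eR, beta, step=1.0 / 32.0):
    """Return the weighting factor code q and the weighted difference d.

    step is a hypothetical uniform quantization step size.
    """
    q = int(round(beta / step))   # factor quantizer: beta -> integer code q
    beta_q = q * step             # factor inverse quantizer: q -> beta(q)
    d = np.asarray(eL, dtype=float) - beta_q * np.asarray(eR, dtype=float)
    return q, d
```

Using β(q) rather than the unquantized β on the encoder side keeps encoder and decoder in lockstep, which is essential for lossless reconstruction.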
Non-patent literature 1: "An introduction to Super Audio CD and DVD-Audio", IEEE Signal Processing Magazine, July 2003, pp. 71-82
Non-patent literature 2: M. Hans and R. W. Schafer, “Lossless Compression of Digital Audio”, IEEE Signal Processing Magazine, vol. 18, no. 4, pp. 21-32, 2001