1. Field of the Invention
The present invention relates to a decimator and a decimating method for digital signal processing (DSP), and more particularly to a decimator and a decimating method for multi-channel audio processing.
2. Description of the Related Art
FIG. 1(a) is a spectrum distribution diagram of a television multi-track stereo (MTS) audio specified by the US broadcast television systems committee (BTSC). The television MTS audio 10 is a composite signal, which includes a single-track (L+R) signal 101, a pilot signal 102, a stereo difference (L−R) signal 103, a second audio program (SAP) signal 104 and a professional channel signal 105.
The single-track (L+R) signal 101 is a baseband signal occupying the band up to about 15 KHz. The frequency Fh of the pilot signal 102 is 15.734 KHz, which equals the horizontal scanning frequency of the BTSC video. The stereo difference (L−R) signal 103 is a double-sideband suppressed-carrier (DSB_SC) amplitude modulation signal with a central frequency of 2*Fh. The central frequency of the second audio program (SAP) signal 104 is 5*Fh, with the frequency spectrum extending from −10 KHz to +10 KHz about that center. The central frequency of the professional channel signal 105 is 6.5*Fh, with the frequency spectrum extending from −3 KHz to +3 KHz about that center.
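The carrier placement described above can be illustrated with a short sketch; this is not part of the specification, merely arithmetic deriving each central frequency of FIG. 1(a) from the horizontal scanning frequency Fh:

```python
# Illustrative sketch (not from the specification): deriving the BTSC
# carrier frequencies of FIG. 1(a) from the horizontal scanning frequency.
FH = 15_734  # horizontal scanning frequency Fh, in Hz

pilot_hz = 1 * FH          # pilot signal 102
stereo_diff_hz = 2 * FH    # central frequency of the (L-R) DSB_SC signal 103
sap_hz = 5 * FH            # central frequency of the SAP signal 104
pro_channel_hz = 6.5 * FH  # central frequency of the professional channel 105

print(pilot_hz, stereo_diff_hz, sap_hz, pro_channel_hz)
```

The SAP band thus sits near 78.67 KHz, well above the 15 KHz baseband of the single-track signal, which is why mixing is required before decimation.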
FIG. 1(b) is a schematic block diagram of the decimation circuit for the BTSC television multi-track stereo audio 10. A stereo difference signal 103a is obtained after the television multi-track stereo (MTS) audio 10 is mixed down by 2Fh through a frequency mixer 120. Since the second audio program (SAP) signal 104 employs frequency modulation (FM), the pilot signal 102 may not be transmitted together with the second audio program (SAP) signal 104 at the transmitting side, so the receiving side cannot perform coherent demodulation. Therefore, after the television MTS audio 10 is mixed down by 5Fh through the frequency mixer 120, a second audio program in-phase (SAP_I) signal 104a and a second audio program quadrature phase (SAP_Q) signal 104b are respectively obtained and sent to a frequency discriminator 140 for FM demodulation.
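The quadrature mixing step above can be sketched as follows; the sampling rate and test tone are illustrative assumptions, not values from the specification, and the low-pass filtering that would normally follow the mixer is omitted:

```python
# Hedged sketch: quadrature down-mixing of the SAP band at 5*Fh, in the
# spirit of frequency mixer 120, yielding in-phase (SAP_I) and quadrature
# (SAP_Q) components for a frequency discriminator.
import math

FH = 15_734.0      # horizontal scanning frequency, in Hz
FS = 8 * 5 * FH    # illustrative sampling rate (assumption)

def quadrature_mix(samples, carrier_hz, fs):
    """Multiply the composite signal by cos/-sin of the carrier to obtain
    the I and Q components (before any low-pass filtering)."""
    i_out, q_out = [], []
    for n, x in enumerate(samples):
        phase = 2 * math.pi * carrier_hz * n / fs
        i_out.append(x * math.cos(phase))
        q_out.append(x * -math.sin(phase))
    return i_out, q_out

# Mixing a tone at exactly 5*Fh concentrates its energy at DC in the I branch.
tone = [math.cos(2 * math.pi * 5 * FH * n / FS) for n in range(1024)]
sap_i, sap_q = quadrature_mix(tone, 5 * FH, FS)
```

Because FM carries information in phase rather than amplitude, both the I and Q branches are needed by the discriminator, which is consistent with the two-path structure of FIG. 1(b).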
The mixed stereo difference signal 103a, the second audio program in-phase (SAP_I) signal 104a and the second audio program quadrature phase (SAP_Q) signal 104b are mainly baseband signals, but they still contain certain high-frequency components derived from the mixing process.
The sampling frequencies of the single-track signal 101, the stereo difference signal 103a, the second audio program in-phase (SAP_I) signal 104a and the second audio program quadrature phase (SAP_Q) signal 104b are reduced through four decimators 131, 132, 133 and 134 (referring to FIG. 1(b)) during the digital signal processing, so as to obtain the decimated single-track signal 101b, stereo difference signal 103b, second audio program in-phase (SAP_I) signal 104c and second audio program quadrature phase (SAP_Q) signal 104d.
During the digital signal processing of the decimators 131, 132, 133 and 134, in order to reduce the sampling frequency while avoiding aliasing of the frequency spectrum, a finite impulse response (FIR) filter is employed to act as a low-pass filter in the frequency domain before the sampling frequency is reduced in the time domain. Additionally, the high-frequency components derived from the mixing process can be filtered out by the low-pass filtering of the FIR filter.
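The filter-then-downsample principle described above can be sketched as follows; the tap values and decimation factor are illustrative assumptions, not coefficients from the specification:

```python
# Hedged sketch of FIR decimation: a low-pass FIR filter suppresses content
# above the new Nyquist frequency, then every m-th sample is kept.
def fir_filter(x, h):
    """Direct-form FIR: y[n] = sum over k of h[k] * x[n-k]."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, hk in enumerate(h):
            if n - k >= 0:
                acc += hk * x[n - k]
        y.append(acc)
    return y

def decimate(x, h, m):
    """Low-pass filter, then reduce the sampling frequency by the factor m."""
    return fir_filter(x, h)[::m]

taps = [0.25, 0.25, 0.25, 0.25]   # illustrative moving-average taps (assumption)
out = decimate([1.0] * 16, taps, 4)
```

Performing the low-pass filtering before discarding samples is what prevents the high-frequency mixing products from folding back into the audio band.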
FIG. 1(c) is a schematic block diagram of the circuit of a second order FIR filter 160, which can be implemented as the stage preceding the decimators 131, 132, 133 and 134. The input signal 161 is converted into a first delay input signal 162 after being delayed by a time delayer 165. The first delay input signal 162 is converted into a second delay input signal 163 after being delayed by a time delayer 166. The signals 161, 162 and 163 are respectively multiplied by the corresponding impulse response coefficients 161h, 162h and 163h through multipliers 161m, 162m and 163m; the products are then summed by an adder 167, and the summation forms an output signal 168.
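The tapped delay line of FIG. 1(c) can be sketched in software as follows; the coefficient values passed in are illustrative assumptions, not values from the specification:

```python
# Hedged sketch of the second order FIR filter 160 of FIG. 1(c): two time
# delayers, three coefficient multipliers, and an adder.
def second_order_fir(x, h0, h1, h2):
    d1 = d2 = 0.0                 # time delayers 165 and 166, initially cleared
    out = []
    for sample in x:              # input signal 161
        y = h0 * sample + h1 * d1 + h2 * d2   # multipliers 161m-163m, adder 167
        out.append(y)             # output signal 168
        d2 = d1                   # second delay input signal 163
        d1 = sample               # first delay input signal 162
    return out

# Feeding a unit impulse returns the coefficient sequence itself.
print(second_order_fir([1.0, 0.0, 0.0], 0.5, 0.3, 0.2))
```

Each register in the hardware corresponds to one of the delay variables here, which makes clear why a high-order filter multiplies the register count.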
A practical FIR filter generally requires an extremely large order. If conventional registers are used as time delayers, the manufacturing cost of the hardware circuits including the four decimators (shown in FIG. 1(b)) is extremely high. Meanwhile, since the registers are serially connected with each other, the logic levels of the registers transition at a high frequency, driven by the circuit clocks, while the FIR filter operates, resulting in heavy power consumption.
FIG. 1(d) is a spectrum distribution diagram of a television multi-track stereo audio 11 regulated by the Electronic Industries Association of Japan (EIA-J). The audio 11 includes a single-track (L+R) signal 111, a stereo difference (L−R) signal 113 or a second audio program (SAP) signal 114, and a pilot identification signal 115. The transmitting side of the television stereo audio system for the EIA-J does not simultaneously transmit both the stereo difference (L−R) signal 113 and the second audio program (SAP) signal 114. The receiving side determines, according to the amplitude modulation of the pilot identification signal 115, whether the transmitted signal is the stereo difference (L−R) signal 113 or the second audio program (SAP) signal 114.
FIG. 1(e) is a schematic block diagram of the decimation circuit for the audio 11. After the audio 11 received by the receiving side is mixed down by 2Fh through a frequency mixer 121, either the single-track (L+R) signal 111 and the stereo difference (L−R) signal 113, or the single-track (L+R) signal 111 and the second audio program (SAP) signal 114, are obtained.
After the single-track (L+R) signal 111 is decimated by a decimator 151, a single-track (L+R) signal 111b is obtained. The stereo difference (L−R) signal 113 includes a stereo difference in-phase (L−R_I) signal 113a and a stereo difference quadrature phase (L−R_Q) signal 113b. After the signals 113a and 113b are decimated by the decimators 153 and 154 respectively, a stereo difference in-phase (L−R_I) signal 113c and a stereo difference quadrature phase (L−R_Q) signal 113d are obtained. Alternatively, the second audio program (SAP) signal 114 includes a second audio program in-phase (SAP_I) signal 114a and a second audio program quadrature phase (SAP_Q) signal 114b. After the signals 114a and 114b are decimated by the decimators 153 and 154, a second audio program in-phase (SAP_I) signal 114c and a second audio program quadrature phase (SAP_Q) signal 114d are obtained. The stereo difference in-phase (L−R_I) signal 113c and the second audio program in-phase (SAP_I) signal 114c share the same FM demodulation path, and the stereo difference quadrature phase (L−R_Q) signal 113d and the second audio program quadrature phase (SAP_Q) signal 114d also share the same FM demodulation path.
Compared with the single-track (L+R) signal 111b, the stereo difference in-phase (L−R_I) signal 113c and the stereo difference quadrature phase (L−R_Q) signal 113d need to be demodulated through a frequency discriminator 141, and thus a period of latency is incurred by such demodulation. Therefore, in compliance with the regulation of EIA-J, the single-track (L+R) signal 111 shall be transmitted 20 microseconds later than the stereo difference (L−R) signal 113 at the transmitting side, so that a left single-track signal can be separated from a right single-track signal. However, the processing time required for demodulating the stereo difference in-phase (L−R_I) signal 113c and the stereo difference quadrature phase (L−R_Q) signal 113d through the frequency discriminator 141 is sometimes more than 20 microseconds, and consequently the latency for separating the left single-track signal from the right single-track signal is not consistent.