I. Field of the Invention
The present invention relates to communication systems. More particularly, the present invention relates to a novel and improved adaptive filter wherein the number of consecutive samples over which an averaging procedure is performed to adjust the tap values of the adaptive filter varies in accordance with the noise content of the channel upon which the adaptive filter operates.
II. Description of the Related Art
Acoustic echo cancellers (AEC) are used in teleconferencing and hands-free telephony applications to eliminate acoustic feedback between a loudspeaker and a microphone. In a mobile telephone system where a vehicle occupant uses a hands-free telephone, acoustic echo cancellers are used in the mobile station to provide full-duplex communications.
For reference purposes, the driver is the near-end talker and the person at the other end of the connection is the far-end talker. The speech of the far-end talker is broadcast out of a loudspeaker in the mobile. If this speech is picked up by a microphone, the far-end talker hears an annoying echo of his or her own voice. An acoustic echo canceller identifies the unknown echo channel between the loudspeaker and microphone using an adaptive filter, generates a replica of the echo, and subtracts it from the microphone input to cancel the far-end talker echo.
A block diagram of a traditional acoustic echo canceller is shown in FIG. 1, where the echo path is drawn with dashed lines. The far-end speech x(n) is broadcast as loudspeaker output and passes through the unknown echo channel 6 to produce the echo signal y(n); the echo channel is drawn as a discrete element, although it is simply an artifact of the near-end microphone being colocated with the near-end loudspeaker. The near-end microphone picks up the sum of the echo signal y(n), the channel noise w(n) and the near-end speech v(n); these additions are depicted by summing elements 8 and 10, which are purely illustrative. The near-end received signal is thus r(n) = y(n) + w(n) + v(n).
When the far-end talker is the only one speaking, the filter coefficients represented by vector h(n) are adapted to track the impulse response of the unknown echo channel. Adaptation control element 2 receives the error, or residual, signal e(n) and the far-end speech x(n) and in response provides a tap correction signal to adaptive filter 4, which corrects its filter tap values accordingly. From the adapted filter coefficients h(n) and the far-end speech signal x(n), adaptive filter 4 generates a replica of the echo, ŷ(n), which is provided to subtraction element 12. Subtraction element 12 subtracts the estimated echo ŷ(n) from the near-end received signal r(n).
Typically, the algorithm used to update the filter tap coefficients that track the echo path response is the least-mean-square (LMS) adaptation algorithm. The filter order is denoted N, and the far-end speech vector x(n) is represented as:

x(n) = [x(n) x(n-1) x(n-2) ... x(n-N+1)],    (1)
and the filter-tap coefficient vector is represented as:

h(n) = [h_0(n) h_1(n) h_2(n) ... h_{N-1}(n)].    (2)
As each new sample r(n) is received, the algorithm computes the echo estimate ŷ(n) from its current filter taps:

ŷ(n) = Σ_{i=0}^{N-1} h_i(n) x(n-i) = h(n) x(n)^T.    (3)

This estimated echo signal ŷ(n) is subtracted from the near-end received signal r(n), so that the echo residual is given by:

e(n) = r(n) - ŷ(n).    (4)
The adaptation algorithm is disabled while the near-end talker is speaking; otherwise, the tap coefficient vector is updated as:

h(n+1) = h(n) + α e(n) x(n),    (5)
where α is the adaptation step size. In the absence of near-end speech v(n) and noise w(n), the error signal is given by:

e(n) = y(n) - ŷ(n).    (6)
The LMS algorithm derives its name from the fact that it attempts to minimize the mean of the squared error:

MSE(n) = E[e^2(n)].    (7)
The LMS algorithm is also called the "stochastic gradient" method because an approximation to the gradient of MSE(n) with respect to the tap vector is given by:

∇MSE(n) ≈ -2 e(n) x(n).    (8)

Since the gradient indicates the direction in which the mean square error increases most rapidly, each tap update steers the tap values in the direction opposite the gradient.
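The step from the mean square error of equation (7) to the single-sample gradient approximation can be sketched as follows; since e(n) = r(n) - h(n)x(n)^T, the gradient of the error with respect to the taps is -x(n):

```latex
\nabla_{h}\,\mathrm{MSE}(n)
  = \nabla_{h}\,E\!\left[e^{2}(n)\right]
  = E\!\left[2\,e(n)\,\nabla_{h}\,e(n)\right]
  = -2\,E\!\left[e(n)\,x(n)\right]
  \approx -2\,e(n)\,x(n)
```

The final step is the "stochastic" part: the expectation is dropped and the instantaneous product e(n)x(n) is used as a single-sample estimate of the true gradient.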
The main advantages of the LMS algorithm are that it requires less computation than other adaptive methods and its stability can be guaranteed by proper choice of step size. This algorithm executes on a sample-by-sample basis; that is, the tap vector h(n) is updated with each new sample.
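As a concrete illustration, the sample-by-sample LMS update of equations (3) through (5) can be sketched in NumPy. The echo channel c, the step size, and the signal lengths below are illustrative assumptions, not values from the specification, and near-end speech is omitted so adaptation stays enabled throughout.

```python
import numpy as np

def lms_echo_canceller(x, r, N=64, alpha=0.01):
    """Sample-by-sample LMS adaptive filter sketch.

    x     : far-end speech samples
    r     : near-end received signal (echo plus noise plus near-end speech)
    N     : filter order
    alpha : adaptation step size
    Returns the echo-residual signal e and the final tap vector h.
    """
    h = np.zeros(N)                    # filter-tap coefficient vector h(n)
    x_buf = np.zeros(N)                # delay line [x(n) x(n-1) ... x(n-N+1)]
    e = np.zeros(len(r))
    for n in range(len(r)):
        x_buf = np.roll(x_buf, 1)      # shift in the newest far-end sample
        x_buf[0] = x[n]
        y_hat = h @ x_buf              # echo estimate, eq. (3)
        e[n] = r[n] - y_hat            # echo residual, eq. (4)
        h = h + alpha * e[n] * x_buf   # tap update, eq. (5)
    return e, h

# Demo on a short hypothetical echo channel with no noise or near-end speech:
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
c = np.array([0.5, -0.3, 0.2, 0.1])    # hypothetical echo impulse response
r = np.convolve(x, c)[:len(x)]         # echo signal y(n) reaching the microphone
e, h = lms_echo_canceller(x, r, N=4, alpha=0.02)
```

In the noiseless case the taps converge to the channel impulse response and the residual decays toward zero, consistent with the stability discussion above.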
An alternative implementation, called the block LMS algorithm, updates the tap vector using a block of L samples. As before, as each new sample r(n) is received, the algorithm computes the echo estimate ŷ(n) and the echo residual signal e(n). However, instead of immediately updating the filter coefficients, the algorithm averages L consecutive instances of the negative gradient e(n)x(n) (the factor of 2 in the gradient expression has been absorbed into the step size α) and updates the coefficient vector h(n) once per L-sample block. The block LMS algorithm can therefore be expressed as:

h(n+L) = h(n) + (α/L) Σ_{i=0}^{L-1} e(n+i) x(n+i),    (9)

where n is a multiple of the block length L.
Notice that if L=1, this equation reduces to the sample LMS update of equation 5, so the sample LMS algorithm can be considered a degenerate case of the block LMS algorithm. The advantage of choosing a block size L greater than 1 is realized when noise is present. As shown in FIG. 1, any noise w(n) present in the echo channel is added to the echo signal y(n) and therefore appears in the error signal:

e(n) = y(n) + w(n) - ŷ(n).    (10)
The direction of each sample gradient is therefore perturbed by the noise, which gives the sample (L=1) LMS algorithm a longer convergence time and a larger asymptotic mean square error. By choosing L>1, however, L consecutive instances of the gradient are averaged to obtain a more accurate estimate, because the positive and negative noise samples tend to cancel each other during the averaging process.
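The block variant can be sketched by accumulating the negative-gradient terms e(n)x(n) over L samples and applying one scaled update per block; the 1/L scaling used here is one common averaging convention, and the channel, noise level, and step size in the demo are illustrative assumptions rather than values from the specification.

```python
import numpy as np

def block_lms_echo_canceller(x, r, N=64, L=8, alpha=0.01):
    """Block LMS sketch: e(n)x(n) terms are accumulated over an L-sample
    block, and the taps are updated once per block with their average,
    so zero-mean channel noise tends to cancel in the gradient estimate."""
    h = np.zeros(N)
    x_buf = np.zeros(N)                # delay line [x(n) x(n-1) ... x(n-N+1)]
    e = np.zeros(len(r))
    grad = np.zeros(N)                 # accumulated e(n)x(n) over the block
    for n in range(len(r)):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = x[n]
        y_hat = h @ x_buf              # echo estimate from current taps
        e[n] = r[n] - y_hat            # echo residual
        grad += e[n] * x_buf           # accumulate the negative gradient
        if (n + 1) % L == 0:           # end of an L-sample block
            h = h + (alpha / L) * grad # one averaged update per block
            grad[:] = 0.0
    return e, h

# Demo with additive channel noise w(n), the case where block averaging helps:
rng = np.random.default_rng(1)
x = rng.standard_normal(20000)
c = np.array([0.5, -0.3, 0.2, 0.1])    # hypothetical echo impulse response
w = 0.1 * rng.standard_normal(len(x))  # zero-mean channel noise
r = np.convolve(x, c)[:len(x)] + w
e, h = block_lms_echo_canceller(x, r, N=4, L=8, alpha=0.05)
```

With L=1 the update loop fires on every sample and the sketch degenerates to the sample LMS, matching the observation above; with L>1 the noise contributions to the accumulated gradient partially cancel before each tap update.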