Directional processing can be used to solve a multitude of audio signal processing problems. In hearing aid applications, for example, directional processing can be used to reduce environmental noise originating from spatial directions other than that of the desired speech or sound, thereby improving the listening comfort and speech perception of the hearing aid user. In audio surveillance, voice-command and portable communication systems, directional processing can be used to enhance the reception of sound originating from a specific direction, thereby enabling these systems to focus on the desired sound. In other systems, directional processing can be used to reject interfering signal(s) originating from specific direction(s), while maintaining the perception of signal(s) originating from all other directions, thereby insulating the systems from the detrimental effect of the interfering signal(s). Beamforming is the term used to describe a technique that uses a mathematical model to maximise the directionality of an input device. In such a technique, filtering weights may be adjusted in real time, or adapted to react to changes in the environment of the user, the signal source, or both.
Traditionally, directional processing for audio signals has been implemented in the time domain using Finite Impulse Response (FIR) filters and/or simple time-delay elements. For applications dealing with simple narrowband signals, these approaches are generally sufficient. To deal with complex broadband signals such as speech, however, these time-domain approaches generally provide poor performance unless significant extra resources, such as large microphone arrays, lengthy filters, complex post-filtering, and high processing power, are committed to the application. Examples of these technologies are described in "Analysis of Noise Reduction and Dereverberation Techniques Based on Microphone Arrays with Postfiltering", C. Marro, Y. Mahieux and K. U. Simmer, IEEE Trans. Speech and Audio Processing, vol. 6, no. 3, 1998, and in "A Microphone Array for Hearing Aids", B. Widrow, IEEE Adaptive Systems for Signal Processing, Communications and Control Symposium, pp. 7-11, 2000.
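The simple time-delay structure described above can be illustrated with a short sketch. The following is a minimal delay-and-sum beamformer for a two-microphone array; the sample rate, tone frequency, and integer steering delay are illustrative assumptions, not values taken from the text.

```python
import numpy as np

def delay_and_sum(mic_signals, delays_samples):
    """Align each microphone signal by an integer sample delay and average.

    mic_signals: list of equal-length 1-D arrays, one per microphone.
    delays_samples: integer steering delay (in samples) per microphone.
    """
    n = len(mic_signals[0])
    out = np.zeros(n)
    for sig, d in zip(mic_signals, delays_samples):
        # Shift the signal by d samples, zero-padding at the edges.
        shifted = np.zeros(n)
        if d >= 0:
            shifted[d:] = sig[:n - d] if d > 0 else sig
        else:
            shifted[:d] = sig[-d:]
        out += shifted
    return out / len(mic_signals)

# Example (assumed geometry): a tone arriving on-axis reaches mic 1 one
# sample after mic 0; delaying mic 0 by one sample re-aligns the two
# signals before summing, so the desired direction adds coherently.
fs = 8000.0
t = np.arange(256) / fs
sig = np.sin(2 * np.pi * 500 * t)
mic0 = sig
mic1 = np.concatenate(([0.0], sig[:-1]))   # arrives 1 sample later
steered = delay_and_sum([mic0, mic1], [1, 0])
```

Signals from other directions arrive with different inter-microphone delays, remain misaligned after steering, and sum less coherently; this is the basis of the directionality, though with only two microphones and integer delays the achievable attenuation is limited, which motivates the larger arrays and longer filters mentioned above.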
In any directional processing algorithm, an array of two or more sensors is required. For audio directional processing, either omni-directional or directional microphones are used as the sensors. FIG. 1 shows a high-level block diagram of a general directional processing system. As seen in the figure, while there are two or more inputs 100, 105 to the system 110, there is generally only one output 120.
There are two common types of directional processing algorithms: adaptive beamforming and fixed beamforming. In fixed beamforming, the spatial response, or beampattern, of the algorithm does not change with time, as opposed to the time-varying beampattern of adaptive beamforming. A beampattern is a polar graph that illustrates the gain response of the beamforming system, at a particular signal frequency, over different directions of arrival. FIG. 2 shows an example of two different beampatterns in which signals from certain directions of arrival are attenuated (or enhanced) relative to signals from other directions. The first is the cardioid pattern 200, typical of some end-fire microphone arrays; the other 205 is the beampattern typical of broadside microphone arrays. FIG. 3 illustrates typical configurations for end-fire 300, 305, 310 and broadside 320, 325, 330 microphone arrays.
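The cardioid beampattern of an end-fire pair can be computed directly. The sketch below (with assumed microphone spacing, analysis frequency, and speed of sound) evaluates a delay-and-subtract differential pair; choosing the internal delay equal to the inter-microphone travel time d/c places a null at 180 degrees and the maximum gain at 0 degrees, i.e. the cardioid shape.

```python
import numpy as np

c = 343.0    # speed of sound in m/s (assumed)
d = 0.012    # microphone spacing in m (assumed, hearing-aid scale)
f = 1000.0   # analysis frequency in Hz (a beampattern is per-frequency)
tau = d / c  # internal delay chosen to place the null at 180 degrees

theta = np.linspace(0.0, 2.0 * np.pi, 361)   # direction of arrival, 1-degree steps
omega = 2.0 * np.pi * f
# Front microphone minus delayed rear microphone:
# H(theta) = 1 - exp(-j * omega * (tau + d * cos(theta) / c))
H = 1.0 - np.exp(-1j * omega * (tau + d * np.cos(theta) / c))
gain = np.abs(H) / np.abs(H).max()           # normalised magnitude response
```

For small spacing relative to the wavelength, this normalised gain approximates the classic cardioid (1 + cos(theta)) / 2, with roughly half gain at 90 degrees; plotting `gain` against `theta` on polar axes would reproduce a pattern like item 200 of FIG. 2.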
More recent Fast Fourier Transform (FFT)-based approaches attempt to improve upon the traditional time-domain approaches by implementing directional processing in the frequency domain. However, many of these FFT-based approaches suffer from wide, highly overlapped sub-bands, and therefore provide poor frequency resolution. They also introduce longer group delays and require more processing power to compute the FFT.
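A minimal sketch (with illustrative parameters) of such frequency-domain processing is given below: each block of microphone samples is transformed with an FFT, a per-bin steering phase (a fractional delay realised as a linear phase across frequency) is applied, and the bins are summed across microphones before the inverse FFT. This is the FFT-based counterpart of the time-domain delay-and-sum structure.

```python
import numpy as np

def fft_delay_and_sum(mic_blocks, delays_sec, fs):
    """Frequency-domain delay-and-sum over one FFT block.

    mic_blocks: list of equal-length 1-D arrays (one block per microphone).
    delays_sec: steering delay in seconds applied to each microphone.
    """
    n = len(mic_blocks[0])
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)       # bin centre frequencies
    out = np.zeros(len(freqs), dtype=complex)
    for block, delay in zip(mic_blocks, delays_sec):
        spectrum = np.fft.rfft(block)
        # A time delay is a linear phase shift across frequency bins.
        out += spectrum * np.exp(-2j * np.pi * freqs * delay)
    return np.fft.irfft(out / len(mic_blocks), n)

# Example (assumed parameters): a 500 Hz tone spanning an integer number
# of periods in the block, with mic 1 a 1-sample circular shift of mic 0;
# steering mic 0 by one sample period re-aligns the pair.
fs = 8000.0
n = 256
t = np.arange(n) / fs
sig = np.sin(2 * np.pi * 500 * t)
mic0 = sig
mic1 = np.roll(sig, 1)
steered = fft_delay_and_sum([mic0, mic1], [1.0 / fs, 0.0], fs)
```

Note that a per-bin phase shift implements a circular delay over the block; practical systems combine this with overlap-add or overlap-save block processing, and the FFT's uniform, overlapping bins are the source of the resolution and group-delay limitations noted above.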
Accordingly, there is a need to solve the problems noted above and also a need for an innovative approach to enhance and/or replace the current technologies.