The enormous progress in microelectronics now allows comprehensive analog and digital signal processing even in the smallest of spaces. The availability of analog and digital signal processors with minimal spatial dimensions has in recent years also paved the way for their use in hearing devices, obviously an area of application in which system size is severely restricted.
Simply amplifying the signal picked up by a microphone often results in a hearing aid that is unsatisfactory for the user, since noise is amplified along with the desired signal and the benefit is restricted to specific acoustic situations. Digital signal processors have therefore been built into hearing aids for a number of years now; these processors digitally process the signal of one or more microphones in order, for example, to explicitly suppress interference noise.
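The interference-noise suppression mentioned above can take many forms; one classical digital technique is spectral subtraction, sketched below purely for illustration. The function name, frame length, and the assumption of a known stationary noise spectrum are illustrative choices, not taken from any particular hearing aid.

```python
import numpy as np

def spectral_subtraction(signal, noise_reference, frame_len=256):
    """Illustrative sketch of spectral subtraction: subtract an
    estimated noise magnitude spectrum from each frame's spectrum,
    keep the noisy phase, and resynthesize. Assumes roughly
    stationary noise characterized by `noise_reference`."""
    # magnitude spectrum of the noise estimate (one frame)
    noise_mag = np.abs(np.fft.rfft(noise_reference[:frame_len]))
    out = np.zeros_like(signal, dtype=float)
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        spec = np.fft.rfft(frame)
        # subtract the noise magnitude, flooring at zero
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)
        # resynthesize with the original phase
        out[start:start + frame_len] = np.fft.irfft(
            mag * np.exp(1j * np.angle(spec)), n=frame_len)
    return out
```

Real systems use overlapping windowed frames and adaptive noise estimates; the non-overlapping rectangular frames here only keep the sketch short.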
It is also known to implement Blind Source Separation (BSS) in hearing aids in order to assign components of an input signal to different sources and to generate corresponding individual signals. For example, a BSS system can split the input signals of two microphones into two individual signals, one of which can then be selected and output to the user of the hearing aid via a loudspeaker, possibly after amplification or further processing.
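One classical way to realize BSS for an instantaneous (non-reverberant) two-channel mixture is independent component analysis; the following is a minimal sketch in the style of FastICA. The function name and parameters are illustrative assumptions, and a real hearing aid faces convolutive (reverberant) mixtures that would require frequency-domain BSS.

```python
import numpy as np

def separate_two_channels(x, iters=100):
    """Sketch of blind source separation on an instantaneous
    two-channel mixture using FastICA-style fixed-point iterations.
    x: array of shape (2, n_samples), the two microphone signals.
    Returns two estimated source signals (up to permutation/scale)."""
    x = x - x.mean(axis=1, keepdims=True)
    # whitening: decorrelate the channels and normalize their variance
    d, E = np.linalg.eigh(np.cov(x))
    z = (E @ np.diag(d ** -0.5) @ E.T) @ x
    n = z.shape[1]
    W = np.eye(2)
    for _ in range(iters):
        y = W @ z
        g = np.tanh(y)                # contrast nonlinearity
        g_prime = 1.0 - g ** 2
        # fixed-point update for all rows of the unmixing matrix
        W = (g @ z.T) / n - np.diag(g_prime.mean(axis=1)) @ W
        # symmetric decorrelation keeps the rows orthonormal
        U, _, Vt = np.linalg.svd(W)
        W = U @ Vt
    return W @ z
```

The output order and scale of the separated signals are inherently ambiguous in BSS, which is why a subsequent selection stage, as described above, is needed to decide which individual signal is presented to the user.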
Another known method is to classify the current acoustic situation: the input signals are analyzed and characterized in order to distinguish between different situations, which can be related to model situations of daily life. The situation thus identified can then, for example, determine which of the individual signals is provided to the user.
Thus, for example, in M. Büchler and N. Dillier, S. Allegro and S. Launer, Proc. DAGA, pages 282-283 (2000), a classification of the acoustic environment for hearing device applications is described in which one of the classification variables used is an averaged signal level.
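An averaged signal level of the kind mentioned can be computed, for instance, by exponentially smoothing the frame power and converting it to decibels; a coarse situation label can then be derived from thresholds. The smoothing constant and the thresholds below are hypothetical placeholders, and real classifiers combine many features beyond the level alone.

```python
import numpy as np

def averaged_level_db(frames, alpha=0.9, eps=1e-12):
    """Time-averaged signal level as one classification feature.
    frames: array of shape (n_frames, frame_len).
    Returns the exponentially smoothed level of each frame in dB.
    `alpha` is a hypothetical smoothing constant."""
    levels = []
    avg = None
    for frame in frames:
        p = np.mean(frame ** 2)  # mean power of this frame
        avg = p if avg is None else alpha * avg + (1 - alpha) * p
        levels.append(10.0 * np.log10(avg + eps))
    return np.array(levels)

def classify_situation(level_db, quiet_thr=-40.0, loud_thr=-10.0):
    """Hypothetical threshold rule mapping the averaged level to a
    coarse acoustic situation label."""
    if level_db < quiet_thr:
        return "quiet"
    if level_db > loud_thr:
        return "loud/noisy"
    return "moderate"
```

In a complete classifier, such a label (or a richer set of features) would then steer the selection among the individual signals delivered to the user.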