Hearing aids are portable hearing devices used to assist people with impaired hearing. In order to accommodate the numerous individual requirements, different designs of hearing aid are provided, such as, for example, behind-the-ear hearing aids (BTEs) and in-the-ear hearing aids (ITEs), for example concha hearing aids. The hearing aids cited by way of example are worn on the outer ear or in the auditory canal. In addition, bone conduction hearing aids and implantable or vibrotactile hearing aids are also available on the market. In these cases, the damaged hearing is stimulated either mechanically or electrically.
In principle, hearing aids have the following essential components: an input transducer, an amplifier and an output transducer. The input transducer is generally a sound pickup, for example a microphone, and/or an electromagnetic receiver, for example an induction coil. The output transducer is generally implemented as an electroacoustic transducer, for example a miniature loudspeaker, or as an electromechanical transducer, for example a bone conduction receiver. The amplifier is usually integrated in a signal processing unit. This basic structure is shown in FIG. 1 using the example of a behind-the-ear hearing aid. One or more microphones 2 for picking up sound from the environment are integrated in a hearing aid housing 1 for wearing behind the ear. A signal processing unit 3, which is likewise integrated in the hearing aid housing 1, processes and amplifies the microphone signals. The output signal from the signal processing unit 3 is transmitted to a loudspeaker or receiver 4, which outputs an acoustic signal. The sound may optionally be transmitted to the eardrum of the person wearing the device via an acoustic tube, which is fixed in the auditory canal with an otoplastic. The power supply for the hearing aid, and in particular for the signal processing unit 3, is provided by a battery 5 which is likewise integrated in the hearing aid housing 1.
People with impaired hearing suffer considerably from interference signals that are superimposed on the useful signal. Previous approaches for real arrangements (a hearing-aid directional microphone worn on the head) achieve only a restricted directional effect for frequencies below 1.5 to 2 kHz. In particular, it has proved infeasible in practice to suppress signals from two directions simultaneously.
Known from the post-published document DE 10 2004 052912 is a method for reducing interference powers in a directional microphone and a corresponding acoustic system. The method relates inter alia to a three-microphone arrangement. A differential directional microphone formed therefrom is adjusted so that two directional interference sources can be suppressed. In addition, the directional effect is selected so that the summation of interference powers (microphone noise and external interference sources) is minimized.
FIG. 2 is a schematic representation of a known second-order differential directional microphone of this kind. It is formed from three adaptive, first-order differential directional microphones DM1, DM2 and DM3. Three microphones M1, M2 and M3 receive a time-dependent acoustic signal s(t). At the inputs of the first differential microphone DM1, a microphone noise signal n1(t) or n2(t) is added in each instance to the ideal microphone signals. The respective summation signals are digitized with an analog-digital converter A/D, resulting in the microphone signals x1(k) and x2(k). The first-order differential microphone DM1 subtracts the two microphone signals x1(k) and x2(k) in a crosswise fashion, as is known for directional microphones. In the process, the signals are delayed in the corresponding paths by timing elements T, and one difference signal is multiplied by an adaptation parameter a. The resulting signals are added to obtain a first intermediate signal z1(k).
The output signal from the third microphone M3 is also subject to interference from microphone noise n3(t) and the corresponding summation signal is digitally converted into a microphone output signal x3(k). The differential microphone DM2 processes the microphone signals x2(k) and x3(k) to form a second intermediate signal z2(k) and the first differential microphone DM1 processes the two signals x1(k) and x2(k) to form the intermediate signal z1(k). The adaptation in the second differential microphone DM2 is performed with the same adaptation parameter a as in the first differential microphone DM1. In the first directional microphone stage with the two differential microphones DM1 and DM2, therefore, only one signal weighting with the signal factor a takes place.
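The first directional stage described above can be pictured with a short sketch. The following Python function is a non-authoritative illustration of one adaptive first-order differential microphone: the one-sample delay elements T, the exact sign conventions and the placement of the adaptation parameter a are assumptions, since the text names the operations but not their precise arrangement.

```python
def first_order_dm(x1, x2, a):
    """One adaptive first-order differential microphone stage (sketch).

    x1, x2: equal-length sample sequences from two microphones.
    a: adaptation parameter steering the attenuated direction.
    Sign conventions and a one-sample delay T are assumptions.
    """
    z = []
    prev1 = 0.0  # delay element T for x1
    prev2 = 0.0  # delay element T for x2
    for k in range(len(x1)):
        forward = x1[k] - prev2   # x1(k) - x2(k-1): crosswise difference
        backward = x2[k] - prev1  # x2(k) - x1(k-1): opposite crosswise difference
        z.append(forward + a * backward)  # add, with one path weighted by a
        prev1, prev2 = x1[k], x2[k]
    return z
```

Applied to x1(k) and x2(k) this would yield z1(k); applied with the same parameter a to x2(k) and x3(k), it would yield z2(k), matching the shared weighting of the first stage.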
In a similar way, the intermediate signals z1(k) and z2(k) are processed in the differential microphone DM3 to produce an output signal y(k), with a signal weighting by the factor b taking place in this second stage. To finally obtain the output signal y(k), an equalization in the useful signal direction is first performed by an equalizer EQ0 with the transfer function
      H(z) = 1 / (1 - 2z^-2 + z^-4).
Preferably, the equalization takes place in the 0° direction.
Therefore, according to the principle shown, with the second-order directional microphone, in the first stage, attenuation takes place in a first direction (defined by the parameter a) and in the second stage, attenuation takes place in a second direction (defined by the parameter b). As mentioned above, this second-order directional microphone only achieves a limited directional effect for frequencies below 1.5 to 2 kHz.
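The complete two-stage principle can be summarized in a sketch. The following Python function cascades three first-order stages as described (DM1 and DM2 sharing the parameter a, DM3 weighting with b) and can optionally apply the recursion corresponding to H(z) = 1/(1 - 2z^-2 + z^-4). The delay lengths and sign conventions are assumptions; note also that this H(z) has double poles on the unit circle, so a practical equalizer would be regularized.

```python
def second_order_dm(x1, x2, x3, a, b, equalize=False):
    """Sketch of the second-order differential directional microphone.

    Delay lengths and sign conventions are assumed; DM1 and DM2 share
    the adaptation parameter a, DM3 uses the weighting factor b.
    """
    def dm(xa, xb, w):
        # One adaptive first-order differential microphone.
        out, pa, pb = [], 0.0, 0.0
        for k in range(len(xa)):
            out.append((xa[k] - pb) + w * (xb[k] - pa))
            pa, pb = xa[k], xb[k]
        return out

    z1 = dm(x1, x2, a)   # DM1: first intermediate signal
    z2 = dm(x2, x3, a)   # DM2: second intermediate signal, same a
    y = dm(z1, z2, b)    # DM3: second stage, weighting factor b
    if not equalize:
        return y
    # Equalizer EQ0 with H(z) = 1 / (1 - 2 z^-2 + z^-4), implemented
    # literally as y_eq(k) = y(k) + 2 y_eq(k-2) - y_eq(k-4).  This
    # filter is only marginally stable (double poles at z = +/-1);
    # a real implementation would regularize or band-limit it.
    y_eq = []
    for k in range(len(y)):
        v = y[k]
        if k >= 2:
            v += 2.0 * y_eq[k - 2]
        if k >= 4:
            v -= y_eq[k - 4]
        y_eq.append(v)
    return y_eq
```

With a = b = 0 the structure reduces to the fixed delay-and-subtract beamformer y(k) = x1(k) - 2·x2(k-1) + x3(k-2), which offers a quick sanity check of the cascade.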
Known from publication EP 1 307 072 A2 is a method for operating a hearing aid in which disturbing acoustic effects caused by turn-on, turn-off or switching events are to be avoided. For this purpose, a first operating condition of the hearing aid undergoes a sliding transition to a second operating condition. The sliding transition is effected by means of parallel signal processing in two signal paths, with a signal resulting from the first operating condition and a signal resulting from the second operating condition being added with complementary, gradually changing weights.
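The weighted addition of the two parallel paths can be pictured as a cross-fade. The following Python sketch assumes a linear weight ramp; the ramp shape and length are illustrative assumptions not taken from the cited publication.

```python
def sliding_transition(s1, s2, ramp_len):
    """Cross-fade from a first to a second operating condition (sketch).

    s1, s2: equal-length output signals of the two parallel paths.
    ramp_len: number of samples over which the weight slides from 0 to 1
              (a linear ramp is assumed here, not taken from the source).
    """
    out = []
    for k in range(len(s1)):
        w = min(1.0, k / ramp_len)                 # sliding weight for path 2
        out.append((1.0 - w) * s1[k] + w * s2[k])  # complementary weighted sum
    return out
```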
Also known, from the article by Meyer, J. et al., "A highly scalable spherical microphone array based on an orthonormal decomposition of the sound field" (mh acoustics), pages II-1781 to II-1784, IEEE 2002, is a two-stage beamformer. For this, the input signal is first split into spatially orthonormal components. The components are then multiplied by certain coefficients in order to control the direction of the directional microphone.
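The two-stage principle of the cited article can be caricatured with a minimal sketch: first project a snapshot of the microphone signals onto spatially orthonormal components, then weight those components with steering coefficients. The basis matrix and coefficients below are illustrative placeholders, not the eigenbeam decomposition of the article itself.

```python
def two_stage_beamformer(mics, basis, coeffs):
    """Toy two-stage beamformer: decompose, then steer (sketch).

    mics:   one snapshot of the microphone signals.
    basis:  rows form an (assumed) orthonormal spatial basis.
    coeffs: steering coefficients applied to the components.
    """
    # Stage 1: decomposition onto the orthonormal components.
    components = [sum(b * m for b, m in zip(row, mics)) for row in basis]
    # Stage 2: weighted recombination controls the look direction.
    return sum(c * v for c, v in zip(coeffs, components))
```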