Many modern hearing instruments include signal processing which allows the hearing instrument to amplify the sound arriving from one direction (typically from the front of the hearing instrument user), while attenuating the sound from other directions. A simple test of this functionality presents pure tones at various frequencies, first from the front of the hearing instrument and then from another direction, in two separate measurements.
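The two-measurement comparison described above can be sketched as follows. The measured levels here are purely illustrative, and the function name and data layout are assumptions for the sake of the example, not part of any standardized test procedure.

```python
# Hypothetical output levels (dB SPL) of a hearing instrument for pure-tone
# stimuli at a fixed input level, measured once from the front (0 degrees)
# and once from the side (90 degrees).  The values are illustrative only.
front_levels = {500: 85.0, 1000: 88.0, 2000: 90.0, 4000: 87.0}  # Hz -> dB SPL
side_levels = {500: 80.0, 1000: 79.0, 2000: 78.0, 4000: 80.0}   # Hz -> dB SPL

def directionality(front, side):
    """Front-to-side level difference per frequency, in dB.

    A positive value means the instrument amplifies sound from the front
    more than sound from the side, i.e. it is directional at that frequency.
    """
    return {f: front[f] - side[f] for f in front}

diff = directionality(front_levels, side_levels)
```

With the example values, the instrument shows a 9 dB front-to-side advantage at 1 kHz, confirming directional behaviour at that frequency.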
This type of test works well if the hearing instrument is operating in a simple mode in which the amplification is nearly independent of the type of signal presented to its microphone(s).
However, with the recent development of advanced hearing instruments, the signal processing functions in the hearing instrument may include adaptation to the received signal. Specifically, one type of algorithm may detect the presence or absence of speech in the microphone signal(s), and process the signal(s) in order to optimize speech perception for the hearing instrument user. Such an algorithm may classify pure tone signals as non-speech or noise and suppress the signals, leading to an incorrect measurement of the directionality characteristics.
Attempts to avoid the suppression of the directionality test signal have been described in the literature. For example, when simultaneous tones spanning a broad spectrum are presented, some hearing instrument algorithms are more likely to classify the test signal as “speech”, thereby allowing a test of directionality.
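A broad-spectrum tone complex of the kind mentioned above can be generated by summing several simultaneous pure tones. This is a minimal sketch; the sample rate, tone frequencies, and normalization scheme are assumptions for illustration, not parameters from any published test method.

```python
import math

SAMPLE_RATE = 16000                         # Hz, assumed
TONE_FREQS = [250, 500, 1000, 2000, 4000]   # Hz, broad-spectrum tone complex
DURATION = 0.5                              # seconds

def multitone(freqs, duration, sample_rate):
    """Sum of simultaneous pure tones, normalized to peak amplitude 1.

    Presenting many tones at once spreads energy across the spectrum,
    which can make a speech detector more likely to pass the stimulus.
    """
    n = int(duration * sample_rate)
    samples = [
        sum(math.sin(2 * math.pi * f * t / sample_rate) for f in freqs)
        for t in range(n)
    ]
    peak = max(abs(s) for s in samples)
    return [s / peak for s in samples]

stimulus = multitone(TONE_FREQS, DURATION, SAMPLE_RATE)
```

The resulting sample list could then be played through the test loudspeaker in place of a single pure tone; the normalization simply keeps the summed waveform within a unit amplitude range.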
Although this method may be effective in some situations, the trend towards more advanced speech processing algorithms in hearing instruments leads to a desire to use natural signals as stimuli.