The signal processing in hearing aids aims to compensate for hearing loss as well as to improve speech intelligibility and sound quality. Digital hearing aids have a large number of parameters that define the details of the signal processing. While hearing aid manufacturers define default values for most of these parameters in an attempt to benefit a majority of hearing impaired users, some of these values are not optimal for every user and every listening situation. The quest for optimal parameters often implies compromises, for example between listening comfort and speech intelligibility. Examples of parameters that can be personalized include:
the sound amplification as a function of frequency: even after applying a fitting rule that converts the audiogram into a gain prescription, further individual adjustments are needed,
the time constants of the level detector in the automatic gain control, e.g., adjusted to the user's age, with a preference for longer time constants at higher ages,
the aggressiveness of the dynamic range compression, e.g., according to the need to hear soft sounds and to the sensitivity of the user to loud sounds,
settings of the directional microphone, e.g., for users who need help in work-related meeting situations.
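To make the first two parameter classes above concrete, the following is a minimal illustrative sketch of a simplified "half-gain" fitting rule and a one-band dynamic range compressor with adjustable attack and release time constants. All thresholds, ratios, time constants and frame rates used here are hypothetical example values chosen for illustration, not a clinical prescription or the method of the present invention.

```python
import math

def half_gain_prescription(audiogram_db):
    """Map hearing thresholds (dB HL) per frequency to insertion gains (dB).

    Simplified example rule: prescribe half of the measured hearing loss
    at each audiogram frequency.
    """
    return {freq: 0.5 * loss for freq, loss in audiogram_db.items()}

def smoothing_coeff(time_constant_s, frame_rate_hz):
    """One-pole smoothing coefficient for the AGC level detector."""
    return math.exp(-1.0 / (time_constant_s * frame_rate_hz))

def compress(levels_db, threshold_db=50.0, ratio=2.0,
             attack_s=0.005, release_s=0.100, frame_rate_hz=100.0):
    """Apply a static compression curve with attack/release smoothing.

    `levels_db` is a sequence of input levels (dB), one per frame.
    A higher `ratio` means more aggressive compression; longer release
    time constants give calmer gain behaviour, which (as noted in the
    text) is often preferred with growing age.
    Returns the gain reduction in dB for each frame (0 dB = no change).
    """
    a_att = smoothing_coeff(attack_s, frame_rate_hz)
    a_rel = smoothing_coeff(release_s, frame_rate_hz)
    detector = levels_db[0]
    gains = []
    for lvl in levels_db:
        # Fast attack when the level rises, slow release when it falls.
        a = a_att if lvl > detector else a_rel
        detector = a * detector + (1.0 - a) * lvl
        over = max(0.0, detector - threshold_db)
        gains.append(-over * (1.0 - 1.0 / ratio))
    return gains
```

For example, a 60 dB HL loss at 1 kHz yields a prescribed gain of 30 dB under this rule, and a steady 80 dB input with a 50 dB threshold and 2:1 ratio is attenuated by 15 dB.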
In conventional hearing aids, after the fitting process has been completed, typically at most two parameters remain accessible to the hearing impaired user: the volume control of the instrument and a switch for selecting a listening program. All other parameters need to be set during the fitting process. This is traditionally done by a hearing professional according to user feedback. In this process, the hearing impaired user may listen to sample sounds and the expert may ask the user a few questions. In addition, the results of diagnostic measurements are helpful, especially at the beginning of the fitting process when a first fit is made. These diagnostic tools include the user's hearing thresholds or audiogram (taking the air-bone gap into account), the user's most comfortable sound output level, the user's discomfort threshold and the results of various speech intelligibility tests.
Similarly, implantable auditory systems have parameters that need to be adjusted to the bearer of the implant. Here the variation in sensitivity between users is increased by side-effects of the implantation process. Hence, the need for fitting—or self-fitting—is even greater. Self-fitting is an attractive alternative when the number of experts capable of fitting implantable auditory systems is limited, as may be the case, e.g., in developing countries.
The approach of hearing aid fitting by an expert has some flaws: the fitting process is somewhat tedious, and it requires the user to travel to a specialized facility, listen to sometimes annoying sounds, and answer questions about past sound perception. This process is even harder when fitting young children, who may not have the discipline or the attention span required to correctly carry out the complete fitting process.
Another flaw is that the fitting process is carried out in a quiet and controlled environment, which is not representative of real-world situations. Therefore, settings found during the fitting process may work well in this quiet environment but perform significantly worse in the real-life sound environment of the user.
The present invention is concerned with self-fitting, that is, techniques allowing a user to find the optimal parameters by himself, without help from a trained expert or audiologist. Self-fitting needs to be an easy process that does not require any technical knowledge from the user. Most known approaches involve a simple graphical user interface with keyboard, mouse or touch input, on which the user adjusts a small number of parameters while listening to a predefined listening situation. Variations of this process include comparing the results of two or more sets of parameters and using recordings of listening situations from the acoustic environment of the hearing impaired user.
Application US2011/044483 relates to specialized gesture sensing for fitting hearing aids. It aims to overcome the need for standard keyboard and mouse input devices in the fitting process. The approach allows some patient participation in the fitting. The proposed solution employs devices that act on gestures made by the audiologist or the patient. The gestures can, for example, be used to indicate which ear has a problem, or to increase or decrease the volume by holding the input device and tilting it up or down. However, the proposed self-fitting process is performed in a static way.
In U.S. Pat. No. 7,660,426 the hearing aid fitting is performed by means of a camera. However, that document addresses the problem of the physical fit of the aid to the ear of the hearing impaired user.
Application US2011/202111 discloses an auditory prosthesis with a sound processing unit operable in a first mode in which the processing operation comprises at least one variable processing factor. This factor is adjustable by the user to a setting that causes the output signal of the sound processing unit to be adjusted according to the user's preference for the characteristics of the current acoustic environment.
US2008/226089 relates to dynamic techniques for custom-fit ear hearing devices. The hearing device comprises motion and pressure sensors. The received sensor signals are analysed by a computer, which creates a stress-and-motion map. A virtual hearing device model for optimal support and comfort is then derived from this map.
Conventionally, hearing aids are devices worn behind the ear, in the concha of the outer ear or in the ear canal. Recently, however, interest has arisen in an alternative approach to hearing aids based on consumer electronic devices such as smartphones or portable music players. In this approach, the hearing loss compensation is realized in a consumer device and the sound is presented to the user by headphones or wirelessly through an earpiece. Such a personal communication device was already shown in WO2012/066149.