Hearing aids are customized for the user's specific type of hearing loss and are typically programmed to optimize each user's audible range and speech intelligibility. Many different prescription methods may be used for this purpose (H. Dillon, Hearing Aids, Sydney: Boomerang Press, 2001), the most common being based on hearing thresholds and discomfort levels. Each prescription method rests on a different set of assumptions and operates differently to find the optimum gain-frequency response of the device for a given user's hearing profile. In practice, the optimum gain response depends on many other factors, such as the type of environment, the listening situation and the personal preferences of the user. The optimum adjustment of other components of the hearing aid, such as noise reduction algorithms and directional microphones, likewise depends on the environment, the specific listening situation and user preferences. It is therefore not possible to optimize the listening experience for all environments using a fixed set of hearing aid parameters. It is widely agreed that a hearing aid that changes its algorithm or features for different environments would significantly increase user satisfaction (D. Fabry and P. Stypulkowski, Evaluation of Fitting Procedures for Multiple-memory Programmable Hearing Aids, paper presented at the annual meeting of the American Academy of Audiology, 1992). Currently, this adaptability typically requires user interaction through the switching of listening modes.
Presently known classification systems and methods for hearing aids are based on a set of fixed acoustical situations ("classes") that are described by the values of certain features and detected by a classification unit. The detected classes 10, 11 and 12 are mapped to respective parameter settings 13, 14 and 15 in the hearing aid, which may likewise be fixed (FIG. 1) or may be changed ("trained") by the hearing aid user (FIG. 2, as shown at 16, 17 and 18 respectively); the latter arrangement is known as a "trainable hearing aid".
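By way of a purely illustrative sketch, the class-to-parameter mapping just described may be expressed as follows. The class names, parameter fields and values are hypothetical assumptions chosen for the example and are not taken from any cited reference; the fixed table corresponds to the arrangement of FIG. 1 and the trainable variant to that of FIG. 2.

```python
# Hypothetical factory mapping from detected classes (10, 11, 12) to
# parameter settings (13, 14, 15); all names and values are illustrative.
FIXED_SETTINGS = {
    "speech": {"gain_db": 20, "noise_reduction": False},
    "noise":  {"gain_db": 10, "noise_reduction": True},
    "music":  {"gain_db": 15, "noise_reduction": False},
}

class TrainableMapping:
    """Class-to-parameter mapping that the user may change ('train')."""

    def __init__(self, defaults):
        # Start from copies of the fixed factory settings so that
        # user adjustments do not overwrite the defaults themselves.
        self.settings = {cls: dict(params) for cls, params in defaults.items()}

    def apply(self, detected_class):
        # Look up the parameter set associated with the detected class.
        return self.settings[detected_class]

    def train(self, detected_class, **user_adjustments):
        # Store the user's per-class overrides (the 'trained' settings
        # 16, 17, 18 of FIG. 2 in this sketch).
        self.settings[detected_class].update(user_adjustments)
```

In this sketch, a fixed hearing aid would use `FIXED_SETTINGS` directly, while a trainable one would call `train()` whenever the user adjusts a setting in a detected situation.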
New hearing aids are now being developed with automatic environmental classification systems designed to detect the current environment and adjust the hearing aid parameters accordingly. This type of classification typically uses supervised learning, with predefined classes guiding the learning process, because environments can often be classified according to their nature (speech, noise, music, etc.). A drawback is that the classes must be specified a priori and may or may not be relevant to the particular user. There is also little scope for adapting the system or the class set after training, or for different individuals.
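A minimal sketch of such supervised classification with predefined classes is given below, using a nearest-centroid rule. The two acoustic features and all centroid values are invented for illustration and do not reflect any actual trained system; in practice the centroids would be learned from labeled training data for the a priori class set.

```python
# Hypothetical centroids learned from labeled training data for the
# predefined classes; feature values are invented for this example.
TRAINED_CENTROIDS = {
    # (spectral_centroid_khz, modulation_depth) per predefined class
    "speech": (1.5, 0.8),
    "noise":  (3.0, 0.1),
    "music":  (2.0, 0.5),
}

def classify(features, centroids=TRAINED_CENTROIDS):
    """Assign a feature vector to the nearest predefined class."""
    def dist2(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda cls: dist2(features, centroids[cls]))
```

The sketch makes the stated drawback concrete: the keys of `TRAINED_CENTROIDS` are fixed a priori, so an environment that fits none of them is still forced into the nearest predefined class.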
EP-A-1 395 080 discloses a method for setting filters for audio processing (beam forming) in which a clustering algorithm is used to distinguish acoustic scenarios (different noise situations). An acoustic scenario clustering unit monitors the acoustic scenario; as soon as a change in the acoustic scenario is detected, a learning phase is initiated and a new scenario is determined by means of clustering training (FIG. 8, reference numeral 57). The end result is a new scenario whose corresponding class replaces the previous one, i.e. a class is deleted.
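The replace-on-learning behavior described for this reference can be sketched as follows. This is not the algorithm of EP-A-1 395 080 itself, merely an assumed minimal reading of it: buffered feature frames that fit no known scenario trigger a trivial one-cluster "training" (a centroid), and the resulting new scenario replaces the nearest previous class. The threshold and all feature values are hypothetical.

```python
def centroid(frames):
    """Mean feature vector of the buffered frames (a trivial
    one-cluster 'clustering training')."""
    n = len(frames)
    return tuple(sum(f[i] for f in frames) / n for i in range(len(frames[0])))

def update_scenarios(scenarios, frames, threshold=1.0):
    """If the buffered frames do not fit any known scenario, learn a new
    centroid and let it replace the nearest old one (class deletion)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    new = centroid(frames)
    nearest = min(scenarios, key=lambda name: dist2(new, scenarios[name]))
    if dist2(new, scenarios[nearest]) > threshold:
        # New scenario replaces the previous class under the same name.
        scenarios[nearest] = new
    return nearest
```

The point of the sketch is the final assignment: the class set never grows, because every newly learned scenario overwrites an existing class.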
EP-A-1 670 285 discloses a method for adjusting parameters of a transfer function of a hearing aid comprising a feature extractor and a classifier.
EP-A-1 404 152 discloses a hearing aid device that adapts itself to the hearing aid user by means of a continuous weighting function passing through various data points, each of which represents an individual weighting of a predetermined acoustic situation. New classes are added, but classes that are no longer used are not deleted.
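The continuous weighting function of this reference can be illustrated, under the assumption that it is realized as piecewise-linear interpolation through the individual weighting data points; the one-dimensional feature axis and all point values below are hypothetical and not taken from the reference.

```python
def weight(x, points):
    """Continuous weighting function passing through the given
    (feature_value, weight) data points via linear interpolation."""
    points = sorted(points)
    # Outside the outermost data points, hold the boundary weight.
    if x <= points[0][0]:
        return points[0][1]
    if x >= points[-1][0]:
        return points[-1][1]
    # Interpolate linearly between the two surrounding data points.
    for (x0, w0), (x1, w1) in zip(points, points[1:]):
        if x0 <= x <= x1:
            return w0 + (w1 - w0) * (x - x0) / (x1 - x0)
```

Because the function is defined by whatever data points are stored, adding a point for a new acoustic situation extends it smoothly, while points for situations that are no longer encountered simply remain in the list, mirroring the add-but-never-delete behavior described above.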