Today's conventional hearing aids typically comprise a Digital Signal Processor (DSP) for processing sound received by the hearing aid to compensate for the user's hearing loss. As is well known in the art, the processing of the DSP is controlled by a signal processing algorithm having various parameters for adjustment of the actual signal processing performed.
The flexibility of the DSP is often utilized to provide a plurality of different algorithms and/or a plurality of sets of parameters of a specific algorithm. For example, various algorithms may be provided for noise suppression, i.e. attenuation of undesired signals and amplification of desired signals. Desired signals are usually speech or music, and undesired signals can be background speech, restaurant clatter, music (when speech is the desired signal), traffic noise, etc.
The different algorithms and parameter sets are typically included to provide comfortable and intelligible reproduced sound quality in different sound environments, such as speech, babble speech, restaurant clatter, music, traffic noise, etc. Audio signals obtained from different sound environments may possess very different characteristics, e.g. average and maximum sound pressure levels (SPLs) and/or frequency content. Therefore, in a hearing aid with a DSP, each type of sound environment may be associated with a particular program wherein a particular setting of algorithm parameters of a signal processing algorithm provides processed sound of optimum signal quality in a specific sound environment. A set of such parameters may typically include parameters related to broadband gain, corner frequencies or slopes of frequency-selective filter algorithms and parameters controlling e.g. knee-points and compression ratios of Automatic Gain Control (AGC) algorithms.
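The AGC parameters mentioned above can be illustrated with a static input/output curve. The following sketch shows how a knee-point and a compression ratio shape the gain applied to an input level; the particular knee-point, ratio and linear gain values are illustrative assumptions, not values prescribed by the text.

```python
import numpy as np

def agc_gain_db(level_db, knee_db=50.0, ratio=2.0, gain_db=20.0):
    """Static gain curve of a simple AGC stage (illustrative only).

    Below the knee-point the full linear gain is applied; above it,
    the excess input level is compressed by the compression ratio,
    i.e. each dB of input yields only 1/ratio dB of output.
    """
    level_db = np.asarray(level_db, dtype=float)
    excess = np.maximum(level_db - knee_db, 0.0)
    return gain_db - excess * (1.0 - 1.0 / ratio)
```

With these example values, an input 10 dB above the knee-point receives 5 dB less gain than a sub-knee input, reflecting the 2:1 compression ratio.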
Consequently, today's DSP based hearing aids are usually provided with a number of different programs, each program tailored to a particular sound environment class and/or particular user preferences. Signal processing characteristics of each of these programs are typically determined during an initial fitting session in a dispenser's office and programmed into the hearing aid by activating corresponding algorithms and algorithm parameters in a non-volatile memory area of the hearing aid and/or transmitting corresponding algorithms and algorithm parameters to the non-volatile memory area.
Some known hearing aids are capable of automatically classifying the user's sound environment into one of a number of relevant or typical everyday sound environment classes, such as speech, babble speech, restaurant clatter, music, traffic noise, etc.
Obtained classification results may be utilised in the hearing aid to automatically select signal processing characteristics of the hearing aid, e.g. to automatically switch to the most suitable algorithm for the environment in question. Such a hearing aid will be able to maintain optimum sound quality and/or speech intelligibility for the individual hearing aid user in various sound environments.
U.S. Pat. No. 5,687,241 discloses a multi-channel DSP based hearing aid that utilises continuous determination or calculation of one or several percentile values of input signal amplitude distributions to discriminate between speech and noise input signals. Gain values in each of a number of frequency channels are adjusted in response to detected levels of speech and noise.
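The percentile-based discrimination idea can be sketched as follows. Speech is strongly amplitude-modulated, so the spread between a low and a high percentile of a channel's level distribution is wide, whereas steady noise yields a narrow spread. The percentile pair and the decision threshold below are illustrative choices, not the values of the patent.

```python
import numpy as np

def percentile_spread_db(band_levels_db, lo=10, hi=90):
    """Spread between two percentiles of a channel's level distribution (dB)."""
    p_lo, p_hi = np.percentile(band_levels_db, [lo, hi])
    return p_hi - p_lo

def looks_like_speech(band_levels_db, threshold_db=12.0):
    # Wide level distribution -> modulated signal, likely speech.
    return percentile_spread_db(band_levels_db) > threshold_db
```

In a multi-channel scheme, such a decision could be made per frequency channel and used to adjust the channel gains.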
However, Applicant determines that it may be desirable to provide a more subtle characterization of a sound environment than merely discriminating between speech and noise. As an example, it may be desirable to switch between an omni-directional and a directional microphone preset program in dependence not just on the level of background noise, but also on further signal characteristics of this background noise. In situations where the user of the hearing aid communicates with another individual in the presence of the background noise, it would be beneficial to be able to identify and classify the type of background noise. Omni-directional operation could be selected in the event that the noise is traffic noise, to allow the user to clearly hear approaching traffic independent of its direction of arrival. If, on the other hand, the background noise were classified as babble-noise, the directional listening program could be selected to allow the user to hear a target speech signal with improved signal-to-noise ratio (SNR) during a conversation.
Applying Hidden Markov Models for analysis and classification may provide a detailed characterisation of e.g. a microphone signal. Hidden Markov Models are capable of modelling stochastic and non-stationary signals in terms of both short- and long-term temporal variations. Hidden Markov Models have been applied in speech recognition as a tool for modelling statistical properties of speech signals. The article "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition", published in Proceedings of the IEEE, Vol. 77, No. 2, February 1989, contains a comprehensive description of the application of Hidden Markov Models to problems in speech recognition.
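Classification with Hidden Markov Models can be sketched as scoring an observation sequence under one pre-trained model per sound class and picking the best-scoring class. The scaled forward algorithm below assumes discrete observation symbols; all model parameters in the example are illustrative, not taken from the cited art.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM.

    pi: initial state probabilities (S,), A: state transition matrix (S, S),
    B: emission probabilities (S, K) over K observation symbols.
    Uses the scaled forward recursion to avoid numerical underflow.
    """
    alpha = pi * B[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        log_lik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_lik

def classify(obs, models):
    """Pick the class whose HMM assigns the highest likelihood."""
    return max(models, key=lambda name: forward_log_likelihood(obs, *models[name]))
```

A sound environment classifier built this way would hold one such model per environmental class and evaluate all of them on each incoming feature sequence.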
WO 01/76321 discloses a hearing aid that provides automatic identification or classification of a sound environment by applying one or several predetermined Hidden Markov Models to process acoustic signals obtained from the listening environment. The hearing aid may utilise determined classification results to control parameter values of a signal processing algorithm or to control switching between different algorithms so as to optimally adapt the signal processing of the hearing aid to a given sound environment.
US 2004/0175008 discloses formation of a histogram from signals which are indicative of direction of arrival (DOA) of signals received at a hearing aid in order to control signal processing parameters of the hearing aid.
The formed histogram is classified, and different control signals are generated in dependence on the result of such classifying.
The histogram function is classified according to at least one of the following aspects:
1) the angular location of an acoustical source, and/or its evolution, with respect to the hearing device and/or with respect to other sources;
2) the distance of an acoustical source, and/or its evolution, with respect to the device and/or with respect to other acoustical sources;
3) the significance of an acoustical source with respect to other acoustical sources; and
4) the angular movement of the device itself, and thus of the individual, with respect to the acoustical surrounding and thus to acoustical sources.
Classification of the sound environment into a number of environmental classes, such as speech, babble speech, restaurant clatter, music, traffic noise, etc., is not mentioned in US 2004/0175008.
Applicant determines that it may be desirable to provide an alternative method in a hearing aid of classifying the sound environment into a number of environmental classes, such as speech, babble speech, restaurant clatter, music, traffic noise, etc.
According to some embodiments, this and other objects are obtained by provision of a hearing aid comprising a microphone and an A/D converter for provision of a digital input signal in response to sound signals received at the microphone in a sound environment, a processor that is adapted to process the digital input signal in accordance with a predetermined signal processing algorithm to generate a processed output signal, and a sound environment detector for determination of the sound environment of the hearing aid based on the digital input signal and for providing an output for selection of the signal processing algorithm generating the processed output signal, the sound environment detector including a feature extractor for determination of histogram values of the digital input signal in a plurality of frequency bands, an environment classifier adapted for classifying the sound environment into a number of environmental classes based on the determined histogram values from at least two frequency bands, and a parameter map for the provision of the output for selection of the signal processing algorithm, and a D/A converter and an output transducer for conversion of the processed output signal to an acoustic output signal.
A histogram is a function that counts the number $n_i$ of observations that fall into various disjoint categories $i$, known as bins. Thus, if $N$ is the total number of observations and $B$ is the total number of bins, the numbers of observations $n_i$ fulfil the following equation:

$$N = \sum_{i=1}^{B} n_i$$
For example, the dynamic range of a signal may be divided into a number of bins, usually of the same size, and the number of signal samples falling within each bin may be counted, thereby forming the histogram. The dynamic range may also be divided into a number of bins of the same size on a logarithmic scale. The number of samples within a specific bin is also termed a bin value or a histogram value or a histogram bin value. Further, the signal may be divided into a number of frequency bands and a histogram may be determined for each frequency band. Each frequency band may be numbered with a frequency band index, also termed a frequency bin index. For example, the histogram bin values of a dB signal level histogram may be given by h(j,k), where j is the histogram dB level bin index and k is the frequency band index or frequency bin index. The frequency bins may range from 0 Hz to 20 kHz, and the frequency bin sizes may be uneven and chosen in such a way that they approximate the Bark scale.
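The construction of such h(j,k) values can be sketched as follows. The input is framed, each frame is transformed with an FFT, the energy in each frequency band is converted to dB, and the resulting level is counted into the matching level bin of that band's histogram. The frame length, band edges, and dB binning below are illustrative assumptions, not values prescribed by the text.

```python
import numpy as np

def level_histogram(x, fs, band_edges_hz, frame_len=128,
                    level_min_db=-96.0, bin_width_db=3.0, n_level_bins=32):
    """Histogram bin values h(j, k): j indexes dB level bins, k frequency bands."""
    n_bands = len(band_edges_hz) - 1
    h = np.zeros((n_level_bins, n_bands), dtype=int)
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / fs)
    for start in range(0, len(x) - frame_len + 1, frame_len):
        # Energy spectrum of the current frame.
        spectrum = np.abs(np.fft.rfft(x[start:start + frame_len])) ** 2
        for k in range(n_bands):
            mask = (freqs >= band_edges_hz[k]) & (freqs < band_edges_hz[k + 1])
            level_db = 10.0 * np.log10(spectrum[mask].sum() + 1e-12)
            # Count the frame's band level into the matching dB level bin.
            j = int(np.clip((level_db - level_min_db) / bin_width_db,
                            0, n_level_bins - 1))
            h[j, k] += 1
    return h
```

For each frequency band k, the level bin values sum to the number of frames analysed, consistent with the relation between N and the bin counts $n_i$ stated above.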
The feature extractor need not determine all histogram bin values h(j,k) of the histogram; it may be sufficient to determine only some of the histogram bin values. For example, it may be sufficient for the feature extractor to determine every second signal level bin value.
The signal level values may be stored on a suitable data storage device, such as a semiconductor memory in the hearing aid. The stored signal level values may be read from the data storage device and organized in selected bins and input to the classifier.
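The two preceding paragraphs can be sketched together: stored per-band signal levels are read back, organized into selected level bins, and flattened into a feature vector for the classifier, keeping only every second level bin. The bin edges and subsampling step are illustrative assumptions.

```python
import numpy as np

def sparse_histogram_features(level_db_store, level_bin_edges_db, step=2):
    """Build classifier input from stored per-band signal levels (dB).

    level_db_store: array of shape (n_frames, n_bands) of stored levels.
    Only every `step`-th level bin is kept, since the feature extractor
    need not determine all histogram bin values.
    """
    n_bins = len(level_bin_edges_db) - 1
    kept = range(0, n_bins, step)
    feats = []
    for k in range(level_db_store.shape[1]):
        counts, _ = np.histogram(level_db_store[:, k], bins=level_bin_edges_db)
        feats.extend(counts[j] for j in kept)
    return np.array(feats)
```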
In accordance with some embodiments, a hearing aid includes a microphone and an A/D converter for provision of a digital input signal in response to a sound signal received at the microphone in a sound environment, a processor that is configured to process the digital input signal in accordance with a signal processing algorithm to generate a processed output signal, a sound environment detector for determination of the sound environment based at least in part on the digital input signal, and for providing an output for selection of the signal processing algorithm, the sound environment detector including (1) a feature extractor for determination of histogram values of the digital input signal in a plurality of frequency bands, (2) an environment classifier configured for classifying the sound environment into a number of environmental classes based at least in part on the determined histogram values from at least two of the plurality of frequency bands, and (3) a parameter map for the provision of the output for the selection of the signal processing algorithm, and a D/A converter and an output transducer for conversion of the processed output signal to an acoustic output signal.
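The wiring of the sound environment detector stages described above can be sketched as follows. The nearest-mean classifier, the class means, and the parameter map entries are illustrative assumptions; the text does not prescribe a particular classifier or program assignment.

```python
import numpy as np

# Environmental classes named in the text.
CLASS_NAMES = ["speech", "babble speech", "restaurant clatter",
               "music", "traffic noise"]

def classify_environment(histogram_features, class_means):
    """Environment classifier: nearest-mean over histogram-derived features."""
    dists = [np.linalg.norm(histogram_features - m) for m in class_means]
    return CLASS_NAMES[int(np.argmin(dists))]

# Parameter map: each environmental class selects a processing program
# (program names are hypothetical).
PARAMETER_MAP = {
    "speech": "directional_program",
    "babble speech": "directional_program",
    "restaurant clatter": "noise_reduction_program",
    "music": "music_program",
    "traffic noise": "omnidirectional_program",
}

def select_program(histogram_features, class_means):
    """Feature extractor output -> classifier -> parameter map output."""
    return PARAMETER_MAP[classify_environment(histogram_features, class_means)]
```

Selecting the omni-directional program for traffic noise and a directional program for babble mirrors the motivation given earlier for classifying the type of background noise.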
In accordance with other embodiments, a hearing aid includes a sound environment detector for determination of a sound environment, the sound environment detector comprising a feature extractor for determination of histogram values of a digital input signal in a plurality of frequency bands, an environment classifier configured for classifying the sound environment into a number of environmental classes based at least in part on the histogram values from at least two of the plurality of frequency bands, and a parameter map for the provision of an output for the selection of a signal processing algorithm for a processor.