Field of the Invention
The present invention relates generally to hearing aids used by the hearing impaired and methods for acoustically fitting such hearing aids based on the person's specific hearing capabilities at various sound frequencies and amplitudes. More specifically, the present invention relates to an apparatus and method for configuring or programming hearing aids based on the person's perceived position of sources of sounds at various frequencies and amplitudes.
State of the Art
Individuals with at least some hearing capacity can determine sound direction. When both ears are involved in this localization process (binaural hearing), sound direction is perceived through differences in sound amplitude (“interaural amplitude difference”) and slight timing variation (“interaural time difference”), as well as differences in audio diffraction between the ear that is closer to the sound source and the shadowed ear. The process of sound localization also includes head movement to confirm where the sound is coming from.
For individuals with normal hearing, the ability to perceive sound direction helps them to focus on a single conversation within a crowded and noisy room. Normal hearing individuals are typically able to converse (i.e., able to get 50% of words and 95% of sentences correct) at −5 dB signal-to-noise (“S/N”) ratio in a noisy environment (e.g., 60 dB Sound Pressure Level (“dBSPL”)). Sound pressure is the local pressure deviation from the ambient (average, or equilibrium) pressure caused by a sound wave. The SI unit (International System of Units) for sound pressure is the pascal (symbol: Pa).
The instantaneous sound pressure is the deviation from the local ambient pressure p0 caused by a sound wave at a given location and given instant in time. The effective sound pressure is the root mean square of the instantaneous sound pressure over a given interval of time (or space).
The sound pressure deviation (instantaneous acoustic pressure) p is:
p = F/A, where: F = force, A = area.
The total pressure ptotal is: ptotal = p0 + p, where: p0 = local ambient atmospheric (air) pressure, p = sound pressure deviation.
Sound pressure level (SPL) or sound level Lp is a logarithmic measure of the rms sound pressure of a sound relative to a reference value. It is measured in decibels (dB) above a standard reference level.
Lp = 10 log10(prms^2/pref^2) = 20 log10(prms/pref) dB, where: pref is the reference sound pressure and prms is the rms sound pressure being measured. The commonly used reference sound pressure in air is pref = 20 μPa (rms), which is usually considered the threshold of human hearing (roughly the sound of a mosquito flying 3 m away).
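The decibel conversion above can be expressed directly in code. The following is a minimal sketch of the SPL formula using the 20 μPa reference stated above; the function name is illustrative, not part of any standard library.

```python
import math

P_REF = 20e-6  # commonly used reference sound pressure in air: 20 uPa (rms)

def spl_db(p_rms):
    """Sound pressure level Lp in dB re 20 uPa, for an rms pressure in pascals.

    Uses the equivalent single-log form: Lp = 20 * log10(p_rms / p_ref).
    """
    return 20.0 * math.log10(p_rms / P_REF)

# The reference pressure itself corresponds to 0 dB SPL:
#   spl_db(20e-6)  -> 0.0
# An rms pressure of 1 Pa corresponds to roughly 94 dB SPL:
#   spl_db(1.0)    -> ~93.98
```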
Research has shown that the ear can also detect a time difference (i.e., delay) of sound of as little as 30 microseconds. For a typical adult-sized head, the time lag for sound waves travelling from one side of the head to the other is approximately 0.6 milliseconds. “Head shadow” refers to the attenuation and diffraction of sounds as they travel from one side of the head to the other. High frequency sounds are more affected by head shadow because of the shorter wavelength. The head shadow effect can be as much as 15 dB at 4000 Hz.
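The roughly 0.6 millisecond figure quoted above can be estimated from head geometry. The sketch below uses the classic Woodworth spherical-head approximation; the head radius and speed of sound are assumed typical values, not measurements from this document.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C
HEAD_RADIUS = 0.0875    # m; an assumed typical adult head radius

def itd_seconds(azimuth_deg):
    """Interaural time difference for a far-field source, Woodworth
    spherical-head model: ITD = (r / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source directly to one side (90 degrees azimuth) yields roughly
# 0.66 ms, consistent with the ~0.6 ms figure for an adult-sized head.
```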
The “just noticeable difference” in azimuth perception for most normal hearing listeners when the sound source is straight ahead is a mere 1 degree. For lower frequency pulsed tones (i.e., below 1000 Hz), individuals with normal hearing have superior azimuth perception. Direction perception below 1000 Hz is predominantly a function of the person's ability to detect the slight timing variation that occurs between the left and right ears as the sound waves travel to the more distant ear. For higher continuous tone frequencies (i.e., between 1000 and 4000 Hz), a normal listener can more easily detect changes in intensity. Sound direction localization for higher frequencies (between 1000 and 4000 Hz) is dominated by perception of differences in sound amplitude due to both an individual's frequency sensitivity and head shadow.
When sound comes from “off-angle” or to the side, the sound at each ear also “sounds” different. Sound to the furthest ear has to diffract (bend) around the head. Not only does the sound wave attenuate and arrive slightly later, but it is also altered in terms of the balance of high and low frequencies it contains (i.e., spectral alteration). Sounds with short wavelength (i.e., high frequency) do not diffract as well, so the furthest ear hears less of the high frequencies contained in the sound. The listener's brain detects this difference in frequency content, and uses the detected difference to locate the source of the sound. Head shadow diffraction also produces an overall diffraction modulation (i.e., interference patterns) for the shadowed ear.
In human speech, spoken vowels generate primarily low frequency sounds and spoken consonants generate primarily high frequency sounds. For example, when the word “choose” is spoken, the “ch” and “z” sounds are formed by air escaping past the tongue, roof of the mouth, teeth and lips, and have a rich high frequency spectrum (i.e., above 1000 Hz). These sounds are referred to as unvoiced sounds (e.g., fricatives, plosives). The “oo” sound is a voiced sound created by sympathetic vibrations of moving air with the vocal cords and resonance within the lung/throat/mouth/nasal cavity, and is typically below 1000 Hz. Thus, for speech recognition in the presence of ambient noise when aided by directional hearing or localization, the human binaural auditory system requires information perceived through differences in sound amplitude, slight timing variation and diffraction effects.
Head movement is also an important component of sound direction localization. The human auditory system performs ongoing spatial re-calibration, and localization accuracy is steadily reacquired as conditions change over time. Additionally, interaural time and level differences are not the only means by which one identifies the location of a sound source. Head movement, which produces changes in frequency, intensity and timing between the left and right ears, assists the auditory system in locating the sound source.
Typically, a person with hearing degradation in one or both ears can still perceive sound direction. The use of hearing aids, however, often diminishes the ability to perceive sound direction. Improper hearing aid fitting can further diminish sound direction perception. This is an obvious disadvantage of hearing aids. Most often, others must raise their voices when talking to someone with hearing aids in order to communicate in a noisy environment.
The inability to clearly understand speech in a noisy environment is the most frequently reported complaint of hearing-impaired people that use hearing aids. Moreover, as hearing loss progresses, individuals require greater and greater signal-to-noise ratios in order to understand speech. It has been universally accepted through digital signal processing research that signal processing alone will not improve the intelligibility of a signal in noise, especially in the case where the signal is one person talking and the noise is other people talking (i.e., “the cocktail party effect”). With currently available hearing aids, there is no way to communicate to the digital processor that the listener now wishes to turn his/her attention from one talker to another, thereby reversing the roles of signal and noise sources.
While significant advances have been made in the last decade in hearing aid technology to improve the ability to hear conversations in noisy environments, such advances were often the result of the elimination of certain defects in hearing aid processing, such as distortion, limited bandwidth, peaks in the frequency response and improper automatic gain control (“AGC”) action. Research conducted in the 1970's, before these defects were corrected, indicated that the wearer of hearing aids typically experienced an additional deficit of 5 to 10 dB above the unaided condition in the S/N required to understand speech. Normal hearing individuals wearing the same hearing aids also experienced a 5 to 10 dB deficit in the S/N required to carry on a conversation, indicating that the hearing aids were at fault.
As a result of diminished sound localization ability and the S/N levels for individuals wearing hearing aids in a noisy environment, most hearing aid wearers try to avoid such situations or end up removing the hearing aids in order to regain the ability to focus on a particular conversation, despite the subsequent loss of understanding in portions of what is often perceived as muffled conversation. In order for hearing impaired individuals to be able to hear discrete conversations in a noisy environment, an increase in S/N is required, even when no defects in the hearing aid processing exist. Those with mild hearing loss typically need about 2 to 3 dB greater S/N than those with normal hearing. Those with moderate hearing loss typically need 5 to 7 dB greater S/N, and those with severe hearing loss typically need a 9 to 12 dB increase in S/N.
One attempt in the art to improve S/N in hearing aids is the use of directional microphones. Directional microphones, however, are subject to sound reception through back and/or side lobes, which is a deficiency in hearing aids that use them. As a result, a person wearing hearing aids with directional microphones sometimes ends up primarily hearing the conversation behind him or her through the back lobe of the directional microphone.
Another deficiency in the current state of hearing aids related to improving speech recognition in noisy environments is directly related to the current fitting protocols used to acoustically fit hearing aids to a hearing impaired individual. The current fitting process often results in substantial loss of localization perception for the user by making only a fraction of the speech cues available. In addition, the maximum loudness discomfort levels of the patient are not measured or accounted for in current hearing aid fitting protocols.
The basic signal processing architecture set forth in U.S. Pat. No. 6,885,752 to Chabries, et al. is representative of most modern hearing aids and uses multi-band, multiplicative compression. The band-pass filters typically generate nine or more fixed channels with band-pass resolutions spaced at half octaves or less between 200 Hz and 12,000 Hz.
In each frequency band, non-linear amplification or gain (referred to as multiplicative compression) is applied to each channel individually. As set forth in U.S. Pat. No. 6,885,752, one factor in restoring hearing for individuals with hearing losses is to provide the appropriate gain. For each frequency band where hearing has deviated from normal, a different multiplicative compression is supplied to make the greatest use of the individual's remaining hearing sensation. The multi-band, multiplicative AGC adaptive compression approach used in U.S. Pat. No. 6,885,752 and most modern hearing aids has no explicit feedback or feed forward.
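The per-band multiplicative compression described above can be sketched as a simple input/output gain rule. The following is an illustrative sketch only, not the architecture of U.S. Pat. No. 6,885,752: the threshold, ratio and gain values are hypothetical fitting parameters for a single band.

```python
def compressed_gain_db(input_level_db, threshold_db, ratio, linear_gain_db):
    """Gain applied in one frequency band: constant (linear) gain below the
    compression threshold, and a compressive input/output slope of 1/ratio
    above it, so louder inputs receive progressively less gain."""
    if input_level_db <= threshold_db:
        return linear_gain_db
    excess = input_level_db - threshold_db
    return linear_gain_db - excess * (1.0 - 1.0 / ratio)

# A band fitted with 25 dB of linear gain and 2:1 compression above 60 dB SPL:
#   compressed_gain_db(50, 60, 2.0, 25.0) -> 25.0 dB (below threshold)
#   compressed_gain_db(80, 60, 2.0, 25.0) -> 15.0 dB (20 dB excess, halved)
```

Applying a different threshold/ratio/gain triple in each band-pass channel is what lets the aid compress each frequency region into the individual's remaining dynamic range independently.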
Assessment of hearing is the first step in the prescribing and acoustic fitting of a hearing aid. Accurate assessment of the individual's hearing function is important because all hearing aid prescriptive formulas depend on one or more sets of hearing diagnostic data. Well known methods of acoustically fitting a hearing aid to an individual begin with the measurement of the threshold of the individual's hearing by using a calibrated sound-stimulus-producing device and calibrated headphones. The measurement of the threshold of hearing takes place in an isolated sound room. It is usually a room where there is very little audible noise. The sound-stimulus-producing device and the calibrated headphones used in the testing are typically referred to as an “audiometer.”
Generally, the audiometer generates pure tones, warbled tones, swept tones or band-pass noise centered at various frequencies between 125 Hz and 12,000 Hz that are representative of the frequency bands or channels designed within the hearing aid. These tones are transmitted through the headphones of the audiometer to the individual being tested. The intensity or volume of each tone is varied until the individual can just barely detect the presence of the tone. For each tone, the intensity of the tone at which the individual can just barely detect the presence of the tone, is known as the individual's air conduction threshold of hearing. Although the threshold of hearing is only one element among several that characterizes an individual's hearing loss, it is the predominant measure traditionally used to acoustically fit a hearing compensation device.
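The "vary the intensity until the tone is just barely detected" procedure above is typically implemented as an adaptive up/down search. The sketch below is a simplified staircase in the spirit of such procedures, with illustrative step sizes and a simulated listener standing in for the patient's responses; it is not the protocol of any particular audiometer.

```python
def staircase_threshold(hears, start_db=40.0, step_down=10.0, step_up=5.0,
                        floor_db=-10.0, ceiling_db=120.0, reversals_needed=4):
    """Simplified adaptive staircase: lower the tone level after each
    'heard' response, raise it after each miss, and estimate the threshold
    as the mean level at a fixed number of direction reversals.

    `hears(level_db)` is a callable returning True if the listener detects
    the tone at that level (here, a stand-in for the patient's response).
    """
    level, reversals, last_dir = start_db, [], None
    while len(reversals) < reversals_needed:
        if hears(level):
            direction, next_level = -1, max(floor_db, level - step_down)
        else:
            direction, next_level = +1, min(ceiling_db, level + step_up)
        if last_dir is not None and direction != last_dir:
            reversals.append(level)  # record the level where direction flipped
        last_dir, level = direction, next_level
    return sum(reversals) / len(reversals)

# Simulated listener whose true threshold at this frequency is 25 dB:
estimate = staircase_threshold(lambda db: db >= 25)
# `estimate` lands in the neighborhood of the true 25 dB threshold.
```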
The usable range of hearing (also called the dynamic range) is usually characterized along coordinates of frequency and sound pressure level and falls between an area bounded by the audibility curve (or threshold of hearing) and the Loudness Discomfort Level (LDL), which are sounds too loud to listen to in comfort or are unpleasant or are painful. Many hearing aid fitting protocols include the measurement of the Loudness Discomfort Level as a function of frequency.
Examples of more advanced protocols are set forth in U.S. Pat. Nos. 6,201,875 and 6,574,342. These patents disclose a system where curves are generated for a series of different loudness levels (or loudness contours) such as contours for: Uncomfortably Loud, Loud but OK, Comfortable but Slightly Loud, Comfortable, Comfortable but Slightly Soft, Soft, and Very Soft. Also disclosed is a system to dynamically change the non-linear gain for each frequency channel based on the family of curves.
Once the threshold of hearing in each frequency band has been determined, this threshold of hearing is used to estimate the amount of amplification, compression, and/or other adjustment that will be employed to compensate for the individual's loss of hearing. The implementation of the amplification, compression, and/or other adjustments and the hearing compensation achieved thereby depends upon the hearing compensation device being employed. There are various formulas known in the art which have been used to estimate the acoustic parameters based upon the observed threshold of hearing. These include industry hearing compensation device formulas known as NAL1, NAL2, and POGO. There are also various proprietary methods used by various hearing-aid manufacturers. Additionally, based upon the experience of the person performing the testing and the fitting of the hearing-aid to the individual, these various formulas may be adjusted. The appropriate gain calculated for each frequency channel may also include considerations and adjustments for the additional measurements of loudness discomfort level or other measured loudness contours. The appropriate gain calculated for each frequency channel then becomes the hearing compensation curve or look-up table data programmed into the hearing aid for each frequency channel. Programming the hearing aid memory for the hearing aid digital signal processor (DSP) may be done dynamically during the fitting process with devices such as the GN Otometrics HI-PRO programming interface so that changes to the hearing compensation curves or look-up table data may be evaluated immediately by the person being fitted and the audiologist.
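The mapping from measured thresholds to a per-channel gain table can be illustrated with the simplest classical prescriptive rule. The sketch below uses the well-known "half-gain rule" (gain of roughly half the hearing loss per band) purely as an example; actual formulas such as NAL and POGO add frequency-dependent corrections, and the threshold values shown are hypothetical.

```python
def half_gain_rule(threshold_db_hl):
    """First-approximation prescriptive gain: roughly half the measured
    hearing loss in each band, never negative."""
    return max(0.0, 0.5 * threshold_db_hl)

# Hypothetical thresholds (dB HL) for one ear at audiometric frequencies,
# converted into the per-channel gain table programmed into the aid's DSP:
thresholds = {250: 20, 500: 30, 1000: 40, 2000: 55, 4000: 60}
gain_table = {freq: half_gain_rule(t) for freq, t in thresholds.items()}
#   gain_table[2000] -> 27.5 dB of gain in the 2000 Hz channel
```

In practice this table would be further adjusted for loudness discomfort level or other measured loudness contours before being programmed into the hearing aid, as described above.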
Another condition associated with sensorineural hearing loss is loudness recruitment. Loudness recruitment is a condition that results in an abnormally-rapid increase in loudness perception with relatively small increases in sound levels above the hearing threshold of the hearing impaired person. Recruitment is a common characteristic of hearing loss that results from damage to the sensory cells of the cochlea, the most common type of sensory hearing loss. For example, a person with loudness recruitment may not be able to hear high frequency sounds below 50 dBSPL, but may find any sounds above 80 dBSPL uncomfortable and even distorted. For such a hearing impaired individual, recruitment can mean a collapse of loudness tolerance and the feeling of distortion of loud sounds.
Recruitment is always a by-product of a sensorineural hearing loss. Recruitment is usually due to a reduction in neural elements associated with the inner ear hair cells. This phenomenon occurs because at some decibel level, the normal hair cells adjacent to the damaged hair cells (corresponding to the frequency of a hearing loss) are “recruited.” At the decibel level at which these recruited hair cells are triggered, the perceived loudness quickly increases and often causes hearing discomfort.
A known test for recruitment is Alternate Binaural Loudness Balancing (ABLB). ABLB compares the relative loudness of a series of tones presented alternately to each ear. In practice, the ABLB test is rarely performed by audiologists when fitting hearing aids because of the time it takes to perform such tests using current testing methods and devices, and because current hearing aid fitting systems make no use of such test result data.
As current methods of hearing aid fitting do not typically account for loudness recruitment, patients having such a condition are often fitted with hearing aids that become uncomfortable to wear because the dynamic range of hearing is so easily exceeded by the hearing aid. In such cases, the only option is for the patient to manually turn the volume down on the hearing aid, which universally reduces the amplification for all frequencies and across all sound levels.
The goal of any hearing aid is to amplify or otherwise process sounds so that they can be comfortably heard and understood. For larger degrees of hearing loss involving loudness recruitment, where even everyday speech communication is difficult, amplification is required. Amplification that is sufficient to make sub-threshold sounds audible, however, will tend to make higher-level sounds uncomfortably loud. Often, gain compression techniques are employed to compensate for this problem. It is commonly believed in the art, however, that even with the best methods of compression, it is inevitable that hearing-aid amplified sounds will be at least somewhat louder than they would be for a normal-hearing person for some input levels. In addition, because of the techniques employed in current hearing testing systems, even the best amplification compression methods will not be properly configured for a given patient, because the patient has not been properly tested in order to generate the correct gain curves for the hearing aids.
Current methods of hearing aid fitting employ subjective listening methods and interpretation of the test results, which typically rely on verbal communications of sound perception relayed between the patient and an audiologist administering the test. Patients' ability to quantify the perceived loudness of a tone also varies by individual, especially when current testing methods supply tones to each ear at spaced-apart intervals, or between hearing tests that are often several seconds apart. As such, a major deficiency of most hearing aid fitting protocols is the inaccurate test results that are often obtained and then used as the basis for a hearing aid fitting.
Indeed, when such verbal test methods are used, discrepancies of 10 dB or more are not uncommon and have been reported in 36% of threshold of hearing measurements. A typical binaural fitting of digital hearing aids having 9 band-pass channels requires a minimum of 18 hearing threshold measurements. At the reported error rate of 36%, six or seven of the measurements in such a test are likely to be in error.
Another major disadvantage of measurements obtained using a traditional transducer is that results are not interchangeable with measurements taken with another transducer for a given individual.
Still another deficiency of current audiometers is found within the audiometer standards. (See Specification of Audiometers, ANSI S3.6-1989, American National Standards Institute, the entirety of which is incorporated by this reference.) For example, in speech audiometry evaluation, the speech stimuli level is adjusted for one ear and the speech noise level (or masking) is separately adjusted in the opposite ear. Bilateral, asymmetric hearing loss is far more prevalent than symmetrical loss. Asymmetric hearing loss requires different hearing compensation curves for each ear. Moreover, spectral group velocities can shift and distort based on frequency and amplitude weighting and amplification through non-ideal hearing aid components (for example, damping via ferrofluids purposely designed into some receiver-speakers to reduce unintended oscillations).
Accordingly, it would be advantageous to provide hearing aids and a hearing aid fitting system and method that provide increased signal to noise ratios. It would be a further advantage to provide hearing aids and a hearing aid fitting system and method that eliminates the need for fitting by a trained audiologist. It would be another advantage to provide hearing aids and a hearing aid fitting system and method that significantly reduces the smearing of directional information. It would be a further advantage to provide hearing aids and a hearing aid fitting system that compensates for loudness recruitment. These and other advantages are provided by hearing aids and a hearing aid fitting system and method according to the present invention set forth hereinafter by incorporating head azimuth detection during the fitting process.