Hearing loss characteristics are highly individual and hearing thresholds vary substantially from person to person. The hearing loss also varies from frequency to frequency, as reflected by the clinical audiogram. Depending on the type and severity of the hearing loss (sensorineural, conductive or mixed; mild, moderate, severe or profound), the sound processing features of the human ear are compromised in different ways and require different types of functional intervention, ranging from simple amplification of the incoming sound, as in conductive hearing losses, to more sophisticated sound processing and/or the use of non-acoustic transducers, as in the case of profound sensorineural hearing losses. Classical hearing aids capture incoming acoustic signals, amplify them and output the signal through a loudspeaker placed in the external ear canal. In conductive and mixed hearing losses an alternative stimulation pathway, through bone conduction or by directly driving the ossicular chain or the inner ear fluids, can be applied, and the classical hearing aid may be replaced by a bone conduction implant or a middle ear implant. Bone conduction implants resemble conventional acoustic hearing aids, but transmit the sound signal through a vibrator to the skull of the hearing impaired user. Middle ear implants use mechanical transducers to directly stimulate the middle or the inner ear. In sensorineural hearing losses, deficits in sound processing in the inner ear result in an altered perception of loudness and a decreased frequency resolution. To compensate for these changes in loudness perception, less amplification is needed for high-level sounds than for low-level sounds, for example.
The core functionality of hearing aids in sensorineural hearing losses is thus (a) compensating for the sensitivity loss of the impaired human ear by providing the needed amount of amplification at each frequency and (b) compensating for loudness recruitment by means of a situation-dependent amplification. In profound sensorineural hearing losses, cochlear implants (CI) may offer the only functional solution for the patient. Cochlear implants provide electric stimulation to the receptors and nerves in the human inner ear. In the signal processing chain of a cochlear implant, the signal picked up by the microphone is processed in a similar fashion as in a hearing aid. A second stage then transforms the optimized sound signal into an excitation pattern for the implanted stimulator. The implanted stimulator comprises electrical current sources that drive the electrodes, which are surgically implanted into the cochlea to directly stimulate the auditory nerve. Totally implantable cochlear implants (TICI) include an implantable microphone, a rechargeable battery and a speech processor, with no visible components at the head of the hearing impaired user.
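The loudness recruitment compensation described above, in which less amplification is given to high-level sounds than to low-level sounds, can be sketched as a simple level-dependent compression rule. The reference level, reference gain and compression ratio below are purely illustrative values, not parameters taken from the text:

```python
def recruitment_gain_db(input_level_db, gain_at_ref_db=30.0, ratio=2.0):
    """Level-dependent gain sketch: softer inputs receive more amplification.

    Hypothetical parameters for illustration: 30 dB of gain for a 50 dB SPL
    input, with gain decreasing above that reference following a 2:1
    compression ratio.
    """
    ref_level_db = 50.0
    return gain_at_ref_db - (input_level_db - ref_level_db) * (1.0 - 1.0 / ratio)

# A soft input (40 dB SPL) receives more gain than a loud input (80 dB SPL):
# recruitment_gain_db(40.0) -> 35.0 dB
# recruitment_gain_db(80.0) -> 15.0 dB
```

With these illustrative numbers, a 40 dB input range (40 to 80 dB SPL) is mapped onto a 60 dB output range compressed to the listener's reduced dynamic range, which is the essence of recruitment compensation.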
FIG. 1 represents a schematic diagram of a conventional digital hearing aid, comprising a microphone, an analogue-to-digital converter, a digital signal processor, a digital-to-analogue converter and a speaker. FIG. 1 is also representative of bone conduction implants and middle ear implants, which differ only in the nature of the output transducer. The solid arrows in FIG. 1 show the flow of the audio signal between the modules. FIG. 2 shows a scheme of a conventional cochlear implant with the external unit, typically worn behind the ear, composed of a microphone, an analogue-to-digital converter, a digital signal processor performing hearing aid-like signal pre-processing and a modulator unit that creates the excitation patterns for the electrodes in the cochlear implant and prepares the signal for transcutaneous transmission by modulation. The audio signal flow in these devices is shown by the solid arrows. The figure also shows the parts implanted into the head of the hearing impaired user. Here, the signal from the external unit is demodulated and the impaired inner ear is stimulated through the implanted electrodes, as indicated by the dashed arrow. The transmission through the skin of the user is wireless. The totally implantable cochlear implant is composed of all components shown in FIG. 2, except for the modulation and demodulation modules, which are not needed in a totally implanted device.
The core task of the signal processing of hearing aids, and an important part of the signal pre-processing of other hearing support systems, comprises frequency-equalization filtering and amplification, as well as automatic gain control to provide the appropriate loudness perception in all listening situations. In addition to these core tasks, the signal processing can provide noise reduction, feedback reduction, sound quality enhancements, speech intelligibility enhancements, an improved signal-to-noise ratio for sounds from specific directions (directional microphones, beamforming) and more.
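The frequency-equalization part of this core task can be sketched with a prescriptive gain rule. The "half-gain rule" used below is one well-known textbook prescription chosen for illustration, not a rule mandated by the text:

```python
def prescribed_gains_db(audiogram_loss_db, half_gain_factor=0.5):
    """Frequency-equalization sketch using the classic 'half-gain rule':
    prescribe roughly half of the measured hearing loss as gain at each
    audiogram frequency. Input and output are lists of per-frequency dB values.
    """
    return [half_gain_factor * loss for loss in audiogram_loss_db]

# A sloping high-frequency loss of [10, 20, 40, 60] dB measured at, say,
# [500, 1000, 2000, 4000] Hz yields prescribed gains of [5, 10, 20, 30] dB.
```

In a real device these per-frequency gains would parameterize the equalization filter; modern prescriptions (e.g. level-dependent ones) are more elaborate, but the frequency-shaping principle is the same.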
Hearing aids and other hearing solutions need to adapt their amplification not only to the individual hearing loss of the patient, but also to the current sound environment. This is related to the phenomenon of loudness recruitment, which is characteristic of sensorineural hearing losses. As a result of loudness recruitment, more amplification is required in soft listening situations and less amplification in loud listening situations. A slow adaptation of the amount of amplification to the sound environment, with time constants larger than 1 second, is called “automatic volume control”. This type of adaptation has the advantage of providing the correct amount of amplification without distorting the signal: it results in a high sound quality. However, abrupt changes in the level of the input signal are not compensated for and can in some cases result in a painful sensation or in the loss of important information that follows a loud event. Examples of abrupt changes are sudden loud sounds (a door bang), but they also occur when listening to two people talking simultaneously with one of the two persons closer than the other. The state-of-the-art approach to compensating for sudden changes in the input signal level is an “automatic gain control” system that uses short time constants. However, these fast changes of the signal amplitude cause a reduction of the audio quality.
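The trade-off between slow automatic volume control and fast automatic gain control can be sketched as a one-pole gain tracker with separate attack and release time constants. All constants below (target level, sampling rate, time constants) are illustrative assumptions, not values from the text:

```python
import math

def agc_gains(levels_db, target_db=65.0, fs=16000.0,
              attack_s=0.005, release_s=2.0):
    """AGC sketch: the gain tracks (target - input level) per sample.

    A fast attack (5 ms) pulls the gain down quickly when a sudden loud
    sound arrives; a slow release (2 s) raises it gently in soft passages,
    acting like the 'automatic volume control' described in the text.
    """
    a_att = math.exp(-1.0 / (attack_s * fs))   # smoothing for decreasing gain
    a_rel = math.exp(-1.0 / (release_s * fs))  # smoothing for increasing gain
    gain, out = 0.0, []
    for lvl in levels_db:
        desired = target_db - lvl
        # Attack branch when the gain must drop (input got louder).
        a = a_att if desired < gain else a_rel
        gain = a * gain + (1.0 - a) * desired
        out.append(gain)
    return out
```

Feeding this sketch 0.1 s of a soft 50 dB input followed by 0.1 s at 90 dB shows the asymmetry: during the soft passage the gain creeps up slowly, while after the loud onset it drops to its new value within a few milliseconds, which is exactly why fast time constants protect against door bangs at the cost of audio quality.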
Several drawbacks of prior art hearing aids and cochlear implants can be indicated. Hearing aids are often perceived as unattractive and associated with age and handicap. (This social phenomenon is often referred to as ‘stigmatization’.) Even with the latest, less visible devices, market penetration amongst the hearing impaired who both need and can afford hearing aids is around 25%. Another drawback of prior art technology is that, due to the necessity of custom hardware and custom chip development, both hearing aids and cochlear implants are quite expensive. Furthermore, hearing aids require specialized experts for parameter adjustment (hearing aid fitting). This fitting is typically performed by trained professionals, such as audiologists or ENT (ear, nose and throat) doctors, on a PC with dedicated fitting software, normally provided by the manufacturer of the corresponding devices. Specialized expert knowledge is indeed required to correctly adjust the parameters. This is even more true in the case of cochlear implants. The adjustments of the hearing professional can be based on objective audiological measurements, e.g. an audiogram, or on subjective feedback from the user. In the case of cochlear implants, the need for professional support and hearing training is especially extensive. Hearing impaired customers have to visit an audiological clinic for servicing of their device, and even simple software adjustments require a personal visit.
Another drawback of prior art technology is that digital hearing aids and cochlear implants allow only a very limited number of manual adjustments by the hearing impaired person himself: the output volume control and, in some cases, the selection of one of a small number of pre-defined listening programs. Each of these programs comprises a set of parameters optimized for a specific listening situation. In some cases, hearing aids can be controlled by a physical remote control (a hand-held device or a wrist watch with remote control functionality), but the number of parameters that can be changed by these remote controls is limited.
Another drawback of prior art hearing aids and cochlear implants is that solutions to connect these devices to consumer electronics (TV, stereo, MP3 player, mobile phones) are cumbersome and expensive. Furthermore, hearing aids do not have a connection to the Internet, and their interaction with Personal Digital Assistant (PDA) devices and mobile phones is typically limited to the amplification of the voice signal during phone calls or the amplification of reproduced music. The software (firmware) that runs in hearing aids is normally not upgradable. For a small number of products firmware updates may be available, but these updates are not done on a frequent basis; therefore, changes in the signal processing are in most cases limited to parameter-based changes that were anticipated when the device was built.
The newest generation of state-of-the-art digital devices allows for a simple communication between the devices at the left and right ear. However, this communication is limited to a low bit rate transfer of parameters, for example to synchronize the parameters of the automatic gain control in order to avoid hampering spatial perception through independent gains in the two instruments. More advanced approaches that require access to the audio signals from the microphones at the left and right ear are not feasible with current technology.
The upper limit of the frequency range of state-of-the-art hearing aids is typically 8 kHz. Also, only a small number of hearing aids allow for a very simple form of usage monitoring to determine the duration of hearing aid usage. Finally, hearing aids do not monitor hearing loss degradation, except when used as part of a hearing test, and hearing aids are limited in their ability to provide tinnitus masking above 8 kHz.
Several prior art documents have already tackled one or more of the above-mentioned problems. For example, US2009/074206 A1 is concerned with a method of enhancing sound for hearing impaired individuals. A portable assistive listening system is disclosed for enhancing sound, including a fully functional hearing aid and a separate handheld digital signal processing device. The device contains a programmable DSP, an ultra-wide band (UWB) transceiver for communication with the hearing aid and a user input device. By supplementing the audio processing functions of the hearing aid with a separate DSP device (containing more processing power, memory, etc.) the usability and overall functionality of hearing devices can be enhanced. The proposed solution still requires a hearing aid to provide hearing loss compensation.
Application US2007/098115 relates to a wireless hearing aid system and method that incorporates a traditional wireless transceiver headset and additional directional microphones to permit extension of the headset as a hearing aid. The proposed solution contains a mode selector and programmable audio filter so that the headset can be programmed with a variety of hearing aid settings that may be downloaded via the Internet or tailored to the hearing impairment of the patient. No flexible means are available to easily adjust the signal processing parameters.
Patent documents U.S. Pat. No. 6,944,474 and U.S. Pat. No. 7,529,545 also mainly focus on hearing profiles. They propose a mobile phone including resources that apply measures of an individual's hearing profile, a personal choice profile and an induced hearing loss profile (which takes the environmental noise into account), separately or in combination, as the basis of sound enhancement. The sound input in these documents is either a speech signal from a phone call, an audio signal received through a wireless link to a computer, or multimedia content stored on the phone. While the sound environment is taken into account to optimize the perception of these sound sources, the sound environment itself is not the target signal. Rather, the amplification is optimized in order to reduce the masking effect of the environmental sounds.
In application US2005/135644 a digital cell phone is described with built-in hearing aid functionality. The device comprises a digital signal processor and a hearing loss compensation module for processing digital data in accordance with a hearing loss compensation algorithm. The hearing loss compensation module can be implemented as a program executed by a microprocessor. The proposed solution also exploits the superior performance in terms of processing speed and memory of the digital cell phone as compared to a hearing aid. The wireless download capabilities of digital cell phones are said to provide flexibility to the control and implementation of hearing aid functions. In an embodiment the hearing compensation circuit provides level-dependent gains at frequencies where hearing loss is prominent. The incoming digitized signal is processed by a digital filter bank, whereby the received signals are split into different frequency bands. Each filter in the filter bank possesses an adequate amount of stop-band attenuation. Additionally, each filter exhibits a small time delay so that it does not interfere too much with normal speech perception (dispersion) and production. The use of a hierarchical, interpolated finite impulse response filter bank is proposed. The outputs of the filter bank serve as inputs to a non-linear gain table or compression module. The outputs of the gain table are added together in a summer circuit. A volume control circuit may be provided allowing interactive adjustment of the overall signal level. Note that in the proposed system the audio signal captured during a phone call is used as the main input.
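The band-split, per-band gain and summation scheme described above can be sketched as follows. This is only an illustrative stand-in: FFT-based band masking replaces the hierarchical interpolated FIR filter bank of the application, and the per-band gain function is a placeholder for the non-linear gain table:

```python
import numpy as np

def compensate(signal, fs, band_edges_hz, band_gain_fn):
    """Filter-bank compensation sketch: split the signal into frequency
    bands, apply a (possibly level-dependent) gain per band, and sum the
    bands back together, mirroring the filter bank / gain table / summer
    structure described in the text.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    out = np.zeros_like(signal)
    for lo, hi in band_edges_hz:
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(np.where(mask, spectrum, 0), n=len(signal))
        level_db = 10 * np.log10(np.mean(band ** 2) + 1e-12)   # band level
        out += band * 10 ** (band_gain_fn(lo, level_db) / 20)  # gain table
    return out
```

As a quick sanity check, applying a flat 6 dB gain to a 1 kHz tone roughly doubles its amplitude; in a real device `band_gain_fn` would return a level-dependent gain per band, implementing the compression behaviour described earlier.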