A normal human ear transmits sounds as shown in FIG. 1 through the outer ear 101 to the tympanic membrane 102, which moves the bones of the middle ear 103 that vibrate the oval window and round window openings of the cochlea 104. The cochlea 104 is a long narrow duct wound spirally about its axis for approximately two and a half turns. It includes an upper channel known as the scala vestibuli and a lower channel known as the scala tympani, which are separated by the cochlear duct. The cochlea 104 forms an upright spiraling cone with a center called the modiolus where the spiral ganglion cells of the acoustic nerve 113 reside. In response to received sounds transmitted by the middle ear 103, the fluid-filled cochlea 104 functions as a transducer to generate electric pulses which are transmitted to the acoustic nerve 113, and ultimately to the brain.
Hearing is impaired when there are problems in the ability to transduce external sounds into meaningful action potentials along the neural substrate of the cochlea 104. To improve impaired hearing, hearing prostheses have been developed. For example, when the impairment is related to operation of the middle ear 103, a conventional hearing aid may be used to provide acoustic-mechanical stimulation to the auditory system in the form of amplified sound. When the impairment is associated with the cochlea 104, a cochlear implant with an implanted electrode can electrically stimulate auditory nerve tissue with small currents delivered by multiple electrode contacts distributed along the electrode. Although the following discussion is specific to cochlear implants, some hearing impaired persons are better served when the stimulation electrode is implanted in other anatomical structures. Thus, auditory implant systems also include brainstem implants, midbrain implants, etc., each stimulating a specific auditory target in the hearing system.
FIG. 1 also shows some components of a typical cochlear implant system where an external microphone provides an audio signal input to an external implant processor 111 in which various signal processing schemes can be implemented. For example, signal processing approaches that are well-known in the field of cochlear implants include continuous interleaved sampling (CIS) digital signal processing, channel specific sampling sequences (CSSS) digital signal processing (as described in U.S. Pat. No. 6,348,070, incorporated herein by reference), spectral peak (SPEAK) digital signal processing, fine structure processing (FSP) and compressed analog (CA) signal processing.
The processed signal is then converted into a digital data format for transmission by the external transmitter coil 107 to the implant stimulator 108. Besides receiving the processed audio information, the implant stimulator 108 also performs additional signal processing such as error correction, pulse formation, etc., and produces a stimulation pattern (based on the extracted audio information) that is sent through an electrode lead 109 to an implanted electrode array 110. Typically, this electrode array 110 includes multiple electrode contacts 112 on its surface that provide selective stimulation of the cochlea 104.
Binaural stimulation has long been used in hearing aids, but it has only recently become common in hearing implants such as cochlear implants (CI). For cochlear implants, binaural stimulation requires a bilateral implant system with two implanted electrode arrays, one in each ear. The incoming left and right side acoustic signals are similar to those in hearing aids and may simply be the output signals of microphones located in the vicinity of the left and right ear, respectively.
FIG. 2 shows various functional blocks in a typical bilateral cochlear implant signal processing system. Independently on each side—left and right—an input sensing microphone 201 senses environmental sounds and converts them into representative electrical signals that form audio inputs to the system. FIG. 3 shows a typical example of a short time period of an input audio signal from an input sensing microphone 201. The input audio signal is fed through multiple band pass filters (BPFs) 202 that decompose the input audio signal into multiple spectral band pass signals as shown, for example, in FIG. 4. As shown in FIG. 5, each band pass signal 501 is thought of as having a fine structure component 502 and an envelope component 503 (typically derived by Hilbert transformation). The filtered band pass signal oscillates around the zero reference axis line 504 with a frequency that is related to the center frequency of the band pass filter.
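By way of a non-limiting numerical illustration, the band pass decomposition and the envelope/fine structure split described above may be sketched as follows; the 16 kHz sample rate, the synthetic input, and the 800-1200 Hz band edges are assumptions for illustration only, not parameters of the system described:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 16000                    # sample rate in Hz (assumed for illustration)
t = np.arange(1600) / fs      # 0.1 s of signal
# synthetic input: a 1 kHz tone with a slow amplitude modulation
x = (1 + 0.5 * np.sin(2 * np.pi * 20 * t)) * np.sin(2 * np.pi * 1000 * t)

# one band of the band pass filter bank (800-1200 Hz edges are illustrative)
sos = butter(4, [800, 1200], btype="bandpass", fs=fs, output="sos")
band = sosfiltfilt(sos, x)

# Hilbert transformation splits the band pass signal into an envelope
# component and a fine structure component
analytic = hilbert(band)
envelope = np.abs(analytic)                   # slowly varying envelope
fine_structure = np.cos(np.angle(analytic))   # rapidly varying fine structure
```

In a full filter bank, the same decomposition would simply be repeated for each band.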
A non-linear dynamic processing module 203 dynamically adjusts the filter envelopes by adaptive processing such as automatic gain control (AGC) and other dynamic signal processing adjustments. Envelope detectors 204 extract the slowly-varying band pass envelope components of the band pass signals, for example, by full-wave rectification and low pass filtering. Pulse timing module 205 modulates the envelope signals with the corresponding band pass carrier waveforms to produce stimulation pulse requests. The mapping/pulse generation module 206 then performs a non-linear (e.g., logarithmic) mapping of these requests to fit the patient's perceptual characteristics, and produces electrode stimulation signals in the specific form of non-overlapping biphasic output pulses for each of the stimulation contacts (EL-1 to EL-n) of each electrode array implanted in each cochlea on the left and right sides, reflecting the tonotopic neural response of the cochlea.
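The envelope detection and logarithmic mapping steps may be sketched numerically as follows; the sample rate, filter settings, and the THR/MCL current levels are hypothetical placeholders for illustration, not values of the system described:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 16000                  # sample rate in Hz (assumed)
t = np.arange(1600) / fs
# one band pass signal: 1 kHz carrier with 30 Hz amplitude modulation
band = (1 + 0.5 * np.sin(2 * np.pi * 30 * t)) * np.sin(2 * np.pi * 1000 * t)

# envelope detection: full-wave rectification followed by low pass filtering
rectified = np.abs(band)
sos = butter(2, 200, btype="lowpass", fs=fs, output="sos")
envelope = sosfiltfilt(sos, rectified)

# non-linear (logarithmic) mapping of the normalized envelope onto the
# patient's electric dynamic range; THR/MCL values are hypothetical
thr, mcl = 100.0, 800.0     # threshold and most-comfortable current levels
floor = 1e-4                # normalized envelope floor
e = np.clip(envelope / envelope.max(), floor, 1.0)
level = thr + (mcl - thr) * np.log10(e / floor) / np.log10(1.0 / floor)
```

The resulting levels stay within the patient's electric dynamic range [THR, MCL] by construction.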
Bilateral cochlear implants provide the benefits of two-sided hearing which can allow a listener to localize sources of sound in the horizontal plane. That requires information from both ears such as interaural level differences (ILDs) and interaural time differences (ITDs). This is discussed further, for example, in Macpherson, E. A., and Middlebrooks, J. C., Listener Weighting Of Cues For Lateral Angle: The Duplex Theory Of Sound Localization Revisited, J. Acoust. Soc. Am. 111, 2219-3622, 2002, which is incorporated herein by reference. An ITD is a relative time shift between signals arriving at the left and right ear which is caused by different times for the signal to reach each ear when the source of sound is not within the median plane. An ILD is a similar difference in sound levels of signals entering the ears. Two-sided hearing also is known to make speech easier to understand in noise, and again the perception of ITD plays a pivotal role therein. This is explained more fully, for example, in Bronkhorst, A. W., and Plomp, R., The Effect Of Head-Induced Interaural Time And Level Differences On Speech Intelligibility In Noise, J. Acoust. Soc. Am. 83, 1508-1516, 1988, which is incorporated herein by reference.
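The ILD and ITD cues just defined can be estimated directly from a pair of ear signals. A minimal sketch follows, imposing a known delay and attenuation on synthetic broadband noise; the 48 kHz sample rate, 24-sample delay, and 0.6 attenuation factor are assumptions for illustration:

```python
import numpy as np

fs = 48000                       # sample rate (assumed)
delay = 24                       # imposed right-ear delay: 24/48000 = 0.5 ms
rng = np.random.default_rng(0)
src = rng.standard_normal(4096)  # broadband source signal
left = src
right = 0.6 * np.roll(src, delay)  # right ear: delayed (ITD) and attenuated (ILD)

# ILD: interaural level difference in dB (RMS ratio of the two ear signals)
ild_db = 20 * np.log10(np.std(left) / np.std(right))

# ITD: lag of the peak of the interaural cross-correlation;
# a positive lag means the right-ear signal arrives later than the left
corr = np.correlate(right, left, mode="full")
lag = np.argmax(corr) - (len(left) - 1)
itd_seconds = lag / fs
```

Here the cross-correlation peak recovers the imposed 0.5 ms ITD, and the level ratio recovers an ILD of about 4.4 dB.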
In the perception of ITDs, two sources of ITD information can be perceived: ITD information from the signal envelope and ITD information from the signal fine structure. It has been found that the fine structure ITD information plays a more important role than the envelope ITD information for sound localization and for understanding of speech in noise. This has been shown, for example, in Wightman and Kistler, Factors Affecting The Relative Salience Of Sound Localization Cues, in Binaural and Spatial Hearing in Real and Virtual Environments, edited by Gilkey, R. H., and Anderson, T. R. (Lawrence Erlbaum Associates, Mahwah, N.J., 1997); Smith et al., Chimaeric Sounds Reveal Dichotomies In Auditory Perception, Nature 416, 87-90, 2002; Nie et al., Encoding Frequency Modulation To Improve Cochlear Implant Performance In Noise, IEEE Trans. Biomed. Eng. 52, 64-73, 2005; and Zeng et al., Speech Recognition With Amplitude And Frequency Modulations, Proc. Natl. Acad. Sci. 102, 2293-2298, 2005, all of which are incorporated herein by reference.
In older cochlear implant arrangements, the fine structure information was not used. Instead, the incoming sound was separated into a number of frequency bands, for each band the slowly-varying envelope was extracted, and this envelope information was used to modulate the amplitude of a high-frequency pulsatile carrier signal. In such conventional cochlear implants, the frequency and phase of the pulsatile carrier signal was simply dictated by the speech processor and not directly related to the fine structure of the incoming signal. Accordingly, with such known cochlear implants, only the envelope ITD information was available, and consequently, ITD perception was very limited.
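The conventional envelope-only scheme described above amounts to sampling each band's envelope at a fixed, processor-dictated pulse rate. A sketch follows; the 800 pulses-per-second rate, 16 kHz sample rate, and synthetic envelope are assumptions for illustration:

```python
import numpy as np

fs = 16000                      # audio sample rate (assumed)
pulse_rate = 800                # fixed per-channel pulse rate (assumed)
t = np.arange(800) / fs         # 50 ms window
# slowly varying envelope extracted from one frequency band (synthetic)
envelope = 0.5 * (1 + np.sin(2 * np.pi * 10 * t))

# the pulsatile carrier timing is dictated by the processor clock alone,
# so only envelope information (no fine structure timing) is conveyed
period = fs // pulse_rate
stim = np.zeros_like(t)
stim[::period] = envelope[::period]   # amplitude-modulated carrier pulses
```

Note that the pulse instants are strictly periodic regardless of the input signal, which is why only envelope ITD information survives in this scheme.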
More recent cochlear implant systems have been implemented in which the stimulation signals are comprised of stimulation pulses with a timing that is based on temporal events within the fine structure of the left and right side acoustic signals. For instance, such temporal events can be the peaks or zero crossings within the fine structure of the signal. Stimulation schemes for coding fine structure information have been described, for example, in U.S. Patent Publication 20040478675; U.S. Pat. No. 6,594,525; and U.S. Patent Publication 2004136556, which are incorporated herein by reference, and in van Hoesel and Tyler, Speech Perception, Localization, And Lateralization With Bilateral Cochlear Implants, J. Acoust. Soc. Am. 113, 1617-1630, 2003; and Litvak et al., Auditory Nerve Fiber Responses To Electric Stimulation: Modulated And Unmodulated Pulse Trains, J. Acoust. Soc. Am. 110(1), 368-79, 2001, also incorporated herein by reference. With these improved stimulation strategies, ITD perception should be increased as compared to stimulation strategies conveying envelope ITD information only. However, in comparative studies no improvement in sound localization or in the understanding of speech in noise has been found. See van Hoesel and Tyler, supra.
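The zero-crossing events mentioned above can be detected straightforwardly. The following sketch takes the negative-to-positive zero crossings of one band's fine structure as candidate pulse instants; the sample rate and the 230 Hz test frequency are assumptions for illustration:

```python
import numpy as np

fs = 16000                           # sample rate in Hz (assumed)
t = np.arange(320) / fs              # 20 ms window
fine = np.cos(2 * np.pi * 230 * t)   # fine structure of one low-frequency band

# temporal events: negative-to-positive zero crossings of the fine structure
neg = np.signbit(fine)
crossings = np.flatnonzero(neg[:-1] & ~neg[1:]) + 1  # first sample at/after crossing
event_times = crossings / fs         # candidate stimulation pulse instants
```

Unlike a fixed-rate carrier, these event times follow the input signal itself, so the resulting pulse timing can carry fine structure ITD information.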
Hearing impaired listeners are also known to have difficulties with localizing sources of sound and understanding speech in noisy environments. See, for example, Colburn, S. et al., Binaural Directional Hearing-Impairments And Aids, in W. Yost and G. Gourevitch (Eds.), Directional Hearing, pp. 261-278, New York: Springer-Verlag, 1987; Durlach, N. I. et al., Binaural Interaction Of Impaired Listeners: A Review Of Past Research, Audiology, 20(3), 181-211, 1981; Gabriel, K. J. et al., Frequency Dependence Of Binaural Performance In Listeners With Impaired Binaural Hearing, J. Acoust. Soc. Am., 91(1), 336-47, 1992; Hawkins, D. B., and Wightman, F. L., Interaural Time Discrimination Ability Of Listeners With Sensorineural Hearing Loss, Audiology, 19, 495-507, 1980; Kinkel, M. et al., Binaurales Hören bei Normalhörenden und Schwerhörigen I: Meßmethoden und Meßergebnisse, Audiologische Akustik, 6/91, 192-201, 1991; Koehnke, J. et al., Effects Of Reference Interaural Time And Intensity Differences On Binaural Performance In Listeners With Normal And Impaired Hearing, Ear and Hearing, 16, 331-353, 1995; and Smoski, W. J., and Trahiotis, C., Discrimination Of Interaural Temporal Disparities By Normal-Hearing Listeners And Listeners With High-Frequency Sensorineural Hearing Loss, J. Acoust. Soc. Am., 79, 1541-7, 1986, all of which are incorporated herein by reference.