A normal ear transmits sounds as shown in FIG. 1 through the outer ear 101 to the tympanic membrane 102, which moves the bones of the middle ear 103 (malleus, incus, and stapes) that vibrate the oval window and round window openings of the cochlea 104. The cochlea 104 is a long narrow duct wound spirally about its axis for approximately two and a half turns. It includes an upper channel known as the scala vestibuli and a lower channel known as the scala tympani, which are connected by the cochlear duct. The cochlea 104 forms an upright spiraling cone with a center called the modiolus where the spiral ganglion cells of the acoustic nerve 113 reside. In response to received sounds transmitted by the middle ear 103, the fluid-filled cochlea 104 functions as a transducer to generate electric pulses which are transmitted to the acoustic nerve 113, and ultimately to the brain.
Hearing is impaired when there are problems in the ability to transduce external sounds into meaningful action potentials along the neural substrate of the cochlea 104. To improve impaired hearing, hearing prostheses have been developed. For example, when the impairment is related to operation of the middle ear 103, a conventional hearing aid may be used to provide mechanical stimulation to the auditory system in the form of amplified sound. Or when the impairment is associated with the cochlea 104, a cochlear implant with an implanted stimulation electrode can electrically stimulate auditory nerve tissue with small currents delivered by multiple electrode contacts distributed along the electrode.
FIG. 1 also shows some components of a typical cochlear implant system, including an external microphone that provides an audio signal input to an external signal processor 111 where various signal processing schemes can be implemented. The processed signal is then converted into a digital data format, such as a sequence of data frames, for transmission into the implant 108. Besides receiving the processed audio information, the implant 108 also performs additional signal processing such as error correction, pulse formation, etc., and produces a stimulation pattern (based on the extracted audio information) that is sent through an electrode lead 109 to an implanted electrode array 110.
Typically, the electrode array 110 includes multiple electrode contacts 112 on its surface that provide selective stimulation of the cochlea 104. Depending on context, the electrode contacts 112 are also referred to as electrode channels. In cochlear implants today, a relatively small number of electrode channels are each associated with relatively broad frequency bands, with each electrode contact 112 addressing a group of neurons with an electric stimulation pulse having a charge that is derived from the instantaneous amplitude of the signal envelope within that frequency band.
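The mapping from instantaneous envelope amplitude to pulse charge described above can be sketched as follows. This is a minimal Python illustration only; the logarithmic compression map and the charge limits are illustrative assumptions, not parameters of any actual implant.

```python
import math

def envelope_to_charge(envelope, q_min=1.0, q_max=20.0, full_scale=1.0):
    """Map a band-envelope amplitude in [0, full_scale] to a pulse
    charge (arbitrary units). The compressive (logarithmic) map is a
    common loudness-coding choice, used here purely for illustration."""
    x = min(max(envelope / full_scale, 0.0), 1.0)
    compressed = math.log1p(9.0 * x) / math.log(10.0)  # maps 0..1 to 0..1
    return q_min + (q_max - q_min) * compressed
```

A louder envelope within the channel's frequency band thus yields a larger, but compressively growing, stimulation charge.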
It is well-known in the field that electric stimulation at different locations within the cochlea produces different frequency percepts. The underlying mechanism in normal acoustic hearing is referred to as the tonotopic principle. In cochlear implant users, the tonotopic organization of the cochlea has been extensively investigated; for example, see Vermeire et al., Neural tonotopy in cochlear implants: An evaluation in unilateral cochlear implant patients with unilateral deafness and tinnitus, Hear Res, 245(1-2), 2008 Sep. 12, p. 98-106; and Schatzer et al., Electric-acoustic pitch comparisons in single-sided-deaf cochlear implant users: Frequency-place functions and rate pitch, Hear Res, 309, 2014 March, p. 26-35 (both of which are incorporated herein by reference in their entireties).
In some stimulation signal coding strategies, stimulation pulses are applied at a constant rate across all electrode channels, whereas in other coding strategies, stimulation pulses are applied at a channel-specific rate. Various specific signal processing schemes can be implemented to produce the electrical stimulation signals. Signal processing approaches that are well-known in the field of cochlear implants include continuous interleaved sampling (CIS), channel specific sampling sequences (CSSS) (as described in U.S. Pat. No. 6,348,070, incorporated herein by reference), spectral peak (SPEAK), and compressed analog (CA) processing.
In the CIS strategy, the signal processor only uses the band pass signal envelopes for further processing, i.e., the envelopes contain the entire stimulation information. For each electrode channel, the signal envelope is represented as a sequence of biphasic pulses at a constant repetition rate. A characteristic feature of CIS is that the stimulation rate is equal for all electrode channels and there is no relation to the center frequencies of the individual channels. It is intended that the pulse repetition rate not be a temporal cue for the patient (i.e., it should be sufficiently high so that the patient does not perceive tones with a frequency equal to the pulse repetition rate). The pulse repetition rate is usually chosen at greater than twice the bandwidth of the envelope signals (based on the Nyquist theorem).
In a CIS system, the stimulation pulses are applied in a strictly non-overlapping sequence. Thus, as a typical CIS feature, only one electrode channel is active at a time and the overall stimulation rate is comparatively high. For example, assuming an overall stimulation rate of 18 kpps and a 12 channel filter bank, the stimulation rate per channel is 1.5 kpps. Such a stimulation rate per channel usually is sufficient for adequate temporal representation of the envelope signal. The maximum overall stimulation rate is limited by the minimum phase duration per pulse. The phase duration cannot be arbitrarily short because the shorter the pulses, the higher the current amplitudes have to be to elicit action potentials in neurons, and current amplitudes are limited for various practical reasons. For an overall stimulation rate of 18 kpps, the phase duration is 27 μs, which is near the lower limit.
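The timing arithmetic above can be made explicit with a short Python sketch. It assumes the pulse slots exactly tile the frame with no inter-pulse gaps, which is a simplification; real devices reserve some gap time, so the text's 27 μs figure is slightly below the ideal bound computed here.

```python
def cis_timing(overall_rate_pps, num_channels):
    """For strictly interleaved (non-overlapping) biphasic pulses:
    per-channel pulse rate, and the maximum phase duration when each
    pulse slot holds exactly two phases and slots tile the timeline."""
    per_channel_rate = overall_rate_pps / num_channels
    max_phase_us = 1e6 / overall_rate_pps / 2.0  # two phases per slot
    return per_channel_rate, max_phase_us
```

For 18 kpps over 12 channels this gives 1.5 kpps per channel and about 27.8 μs per phase, consistent with the example in the text.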
The Fine Structure Processing (FSP) strategy by Med-El uses CIS in higher frequency channels, and uses fine structure information present in the band pass signals in the lower frequency, more apical electrode channels. In the FSP electrode channels, the zero crossings of the band pass filtered time signals are tracked, and at each negative to positive zero crossing, a Channel Specific Sampling Sequence (CSSS) is started. Typically, CSSS sequences are applied on up to 3 of the most apical electrode channels, covering the frequency range up to 200 or 330 Hz. The FSP arrangement is described further in Hochmair I, Nopp P, Jolly C, Schmidt M, Schößer H, Garnham C, Anderson I, MED-EL Cochlear Implants: State of the Art and a Glimpse into the Future, Trends in Amplification, vol. 10, 201-219, 2006, which is incorporated herein by reference. The FS4 coding strategy differs from FSP in that up to 4 apical channels can have their fine structure information used. In FS4-p, stimulation pulse sequences can be delivered in parallel on any 2 of the 4 FSP electrode channels. With the FSP and FS4 coding strategies, the fine structure information is the instantaneous frequency information of a given electrode channel, which may provide users with an improved hearing sensation, better speech understanding and enhanced perceptual audio quality. See, e.g., U.S. Pat. No. 7,561,709; Lorens et al. "Fine structure processing improves speech perception as well as objective and subjective benefits in pediatric MED-EL COMBI 40+ users." International journal of pediatric otorhinolaryngology 74.12 (2010): 1372-1378; and Vermeire et al., "Better speech recognition in noise with the fine structure processing coding strategy." ORL 72.6 (2010): 305-311; all of which are incorporated herein by reference in their entireties.
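The zero-crossing tracking that triggers a CSSS in an FSP-style apical channel can be sketched as follows. This is an illustrative Python fragment only; it detects the negative-to-positive crossings of an already band-pass filtered signal and is not a description of the actual FSP implementation.

```python
def csss_trigger_times(signal, fs):
    """Return the times (in seconds) of negative-to-positive zero
    crossings of a band-pass filtered time signal. In an FSP-style
    strategy, each such crossing would start a Channel Specific
    Sampling Sequence (CSSS) on that apical channel."""
    times = []
    for i in range(1, len(signal)):
        # previous sample strictly negative, current sample >= 0
        if signal[i - 1] < 0.0 <= signal[i]:
            times.append(i / fs)
    return times
```

Because the crossings recur at the signal's instantaneous period, the trigger times carry the instantaneous frequency information referred to in the text.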
Many cochlear implant coding strategies use what is referred to as an n-of-m approach where only some number n electrode channels with the greatest amplitude are stimulated in a given sampling time frame. If, for a given time frame, the amplitude of a specific electrode channel remains higher than the amplitudes of other channels, then that channel will be selected for the whole time frame, and the number of electrode channels available for coding other information is effectively reduced by one. This results in a clustering of stimulation pulses, so that fewer electrode channels are available for coding important temporal and spectral properties of the sound signal such as speech onset.
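The n-of-m selection step can be sketched in a few lines of Python. This is a generic illustration of the selection rule only, with hypothetical envelope values; it does not reproduce any particular device's implementation.

```python
def select_n_of_m(channel_envelopes, n):
    """n-of-m channel selection for one sampling time frame: keep the
    n channels with the greatest envelope amplitude and zero out the
    rest. Returns (selected channel indices, masked envelopes)."""
    order = sorted(range(len(channel_envelopes)),
                   key=lambda i: channel_envelopes[i], reverse=True)
    selected = sorted(order[:n])
    chosen = set(selected)
    masked = [a if i in chosen else 0.0
              for i, a in enumerate(channel_envelopes)]
    return selected, masked
```

A persistently dominant channel is selected frame after frame, which illustrates how the effective number of channels available for the remaining information shrinks.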
In addition to the specific processing and coding approaches discussed above, different specific pulse stimulation modes are possible to deliver the stimulation pulses with specific electrodes—e.g., mono-polar, bi-polar, tri-polar, multi-polar, and phased-array stimulation. And there also are different stimulation pulse shapes—e.g., biphasic, symmetric triphasic, asymmetric triphasic pulses, or asymmetric pulse shapes. These various pulse stimulation modes and pulse shapes each provide different benefits; for example, higher tonotopic selectivity, smaller electrical thresholds, higher electric dynamic range, fewer unwanted side effects such as facial nerve stimulation, etc.
The standard stimulation pulses in cochlear implants are biphasic. As shown in FIG. 2, such biphasic stimulation pulses have a negative half-wave (cathodic phase) and a charge-balanced positive half-wave (anodic phase). The net charge of a given half-wave pulse corresponds to the product of its current amplitude A and its pulse duration T. To ensure that no DC components are transmitted to the auditory nerve, the biphasic stimulation pulse includes an opposite phase half-wave pulse of equal duration and opposite amplitude to the first half-wave pulse. In specific pulsatile stimulation strategies, sequential or parallel pulses can be generated at different electrode contacts.
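The charge-balance property of the biphasic pulse described above can be illustrated with a short Python sketch. The sampling-rate parameter and amplitude units are arbitrary assumptions for illustration.

```python
def biphasic_pulse(amplitude, phase_duration_us, fs_mhz=1.0):
    """Sample a charge-balanced biphasic pulse: a cathodic (negative)
    half-wave followed by an anodic (positive) half-wave of equal
    amplitude and duration. fs_mhz is an illustrative sampling rate
    in samples per microsecond."""
    n = int(phase_duration_us * fs_mhz)   # samples per phase
    return [-amplitude] * n + [+amplitude] * n

def net_charge(pulse, fs_mhz=1.0):
    """Net charge of a sampled current pulse (sum of amplitude x dt)."""
    return sum(pulse) / fs_mhz
```

Because the two half-waves have equal amplitude A and duration T but opposite sign, their charges A·T cancel and no DC component reaches the auditory nerve.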
Sometimes, depending on the propagation of the electrical stimulation field and the specific anatomical situation, other nerves may be stimulated inadvertently. Such collateral stimulation can result in unintended somatic responses, which are side effects such as twitching of the face and/or eye due to facial nerve stimulation, or a burning sensation in the tongue or throat. These unpleasant somatic responses can increase in intensity with increasing charge. In some cases, this situation can prevent setting the stimulation intensity sufficiently high for effective hearing via the cochlear implant. If only one or a few electrode contacts are affected, these electrode contacts can be deactivated. But this change in the operation of the cochlear implant may have other undesirable consequences for the patient. If a considerable number of or all of the electrode contacts are affected, the cochlear implant may not be usable for hearing in extreme cases.
When setting the stimulation parameters in a patient fitting process, the fitting audiologist can try to change various stimulation parameters such as pulse width, stimulation rate and compression to provide a louder auditory sensation and reduce the undesired somatic responses. Re-implantation with a cochlear implant with a differently arranged reference electrode also has been attempted by placing separate ground electrodes at very specific locations. EP 0 959 943 mentions that facial nerve twitching can be an undesired somatic response. US 2012/0143284 also discusses the problem of undesirable facial nerve stimulation and other unwanted somatic responses. But throughout these documents this issue is always discussed in connection with extra-cochlear electrode contacts which are considered the source of these responses.
U.S. Pat. No. 5,601,617 describes selecting complex stimulus waveforms including triphasic stimulation pulses based on the “response” of the stimulated tissue. Generally the discussion assumes that this is the perceptive response and there is no mention of mitigating undesired somatic responses.