Current hearing aid technology focuses on correcting the detrimental effects of damage to the middle and inner ear, and to the cochlea in particular. The main tool used in the various inventions is non-linear amplification of the impaired sound frequencies.
However, it is by now clear that the benefits of multi-channel non-linear amplification are limited in attaining the goal of speech "understanding".
Lately, in many healthcare fields, it has been shown that by taking advantage of the plasticity of the brain, many physical impairments may be alleviated, if not resolved.
The goal of this invention is to enable "hearing" of unheard or badly heard sound frequencies by training the brain to connect the auditory channel with the visual channel, and to use stimulation of the eye in order to help the brain decipher language when the auditory channel by itself is at a loss due to missing frequencies.
Various effects illustrate how the brain's auditory processing optimizes the understanding of speech. For example, the phenomenon known as the "missing fundamental" consists of the brain determining the fundamental sound frequency after hearing only harmonics of that fundamental, and substituting the "missing fundamental" when trying to decode a word.
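As an illustrative sketch only (not part of the invention), this inference can be mimicked computationally: when a set of heard partials are all integer multiples of one frequency, that frequency is their greatest common divisor, which is the pitch the brain "fills in".

```python
from math import gcd
from functools import reduce

def missing_fundamental(harmonics_hz):
    """Toy model of the 'missing fundamental' effect.

    Given only the heard harmonics (in Hz), infer the absent fundamental
    as the greatest common divisor of their frequencies.
    """
    return reduce(gcd, harmonics_hz)

# Harmonics at 600, 800 and 1000 Hz evoke a 200 Hz pitch never present in the signal.
print(missing_fundamental([600, 800, 1000]))  # 200
```

This is a deliberately simplified model; the auditory system performs this inference robustly even with inharmonic or noisy partials, which a plain GCD does not capture.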
It has also been observed that the brain cannot distinguish between successive sounds "heard" within 3-4 msec of each other, and simply interprets the sum of the two as the signal heard.
The joint processing of information between the auditory cortex and the visual cortex is illustrated by what is known as the "McGurk illusion", where a phoneme heard concurrently with a video of the mouth enunciating a different phoneme is interpreted by the brain as the phoneme articulated in the video. This illusion shows that there are pathways between the visual cortex and the auditory cortex through which the two try to arrive at a common conclusion; in this case the phoneme accompanied by a picture of the mouth articulating it trumps the phoneme that reached only the auditory cortex.
McGurk and MacDonald [Nature 264, 746-748] also showed that when the auditory and visual signals may each point to several possibilities, the brain will select the option commonly favored by both. For example, the phonemes "ba" and "da" can be confused by the auditory cortex, while the phonemes "ga" and "da" can be confused by the visual cortex. Thus when the phoneme "ba" is articulated and at the same time a video of the lips saying "ga" is shown, the brain will conclude that "da" was said, neither "ba" nor "ga".
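The selection of "the option commonly favored by both" can be sketched as a toy fusion model: each modality assigns a likelihood to the candidate phonemes, and the percept is the candidate with the highest joint (product) likelihood. The numbers below are illustrative assumptions, not measured data.

```python
# Toy model of audio-visual fusion in the McGurk illusion.
# Likelihood values are invented for illustration only.
audio_likelihood  = {"ba": 0.45, "da": 0.45, "ga": 0.10}  # "ba"/"da" confusable by ear
visual_likelihood = {"ba": 0.10, "da": 0.45, "ga": 0.45}  # "ga"/"da" confusable by eye

# The percept is the candidate maximizing the product of both likelihoods.
joint = {p: audio_likelihood[p] * visual_likelihood[p] for p in audio_likelihood}
percept = max(joint, key=joint.get)
print(percept)  # "da": the option favored by both modalities combined
```

Under this sketch, "da" wins because it is plausible to both the ear and the eye, whereas "ba" is implausible visually and "ga" implausible auditorily.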
Meredith et al. report in the Proceedings of the National Academy of Sciences (PNAS) 2011, 108 (21), 8856-8861, "Crossmodal reorganization in the early deaf switches sensory, but not behavioral roles of auditory cortex", that:
"Recordings in the auditory field of the anterior ectosylvian sulcus of early-deafened adult cats revealed robust responses to visual stimulation as well as receptive fields that collectively represented the contralateral visual field." They conclude that "These results demonstrate that crossmodal plasticity can substitute one sensory modality for another while maintaining the functional repertoire of the reorganized region".
Laura Ann Petitto et al., in Proceedings of the National Academy of Sciences (PNAS) 2000, 97 (25), 13961-13966, "Speech-like cerebral activity in profoundly deaf people processing signed languages: Implications for the neural basis of human language", note that:
"For more than a century we have understood that our brain's left hemisphere is the primary site for processing language, yet why this is so has remained more elusive. Using positron emission tomography, we report cerebral blood flow activity in profoundly deaf signers processing specific aspects of sign language in key brain sites widely assumed to be unimodal speech or sound processing areas: the left inferior frontal cortex when signers produced meaningful signs, and the planum temporale bilaterally when they viewed signs or meaningless parts of signs (sign-phonetic and syllabic units). Contrary to prevailing wisdom, the planum temporale may not be exclusively dedicated to processing speech sounds, but may be specialized for processing more abstract properties essential to language that can engage multiple modalities. We hypothesize that the neural tissue involved in language processing may not be prespecified exclusively by sensory modality (such as sound) but may entail polymodal neural tissue that has evolved unique sensitivity to aspects of the patterning of natural language. Such neural specialization for aspects of language patterning appears to be neurally unmodifiable in so far as languages with radically different sensory modalities such as speech and sign are processed at similar brain sites, while, at the same time, the neural pathways for expressing and perceiving natural language appear to be neurally highly modifiable."
Renaud Boistel et al., in Proceedings of the National Academy of Sciences (PNAS), 10.1073/pnas.1302218110, Sep. 3, 2013, note that: "Gardiner's Seychelles frog, one of the smallest terrestrial tetrapods, resolves an apparent paradox, as these seemingly deaf frogs communicate effectively without a middle ear. Acoustic playback experiments conducted using conspecific calls in the natural habitat of the frogs provoked vocalizations of several males, suggesting that these frogs are indeed capable of hearing. This species thus uses extra-tympanic pathways for sound propagation to the inner ear. Our models show how bone conduction is enhanced by the resonating role of the mouth and may help these frogs hear".
There is now extensive anatomical and physiological evidence, from a range of species, that multisensory convergence occurs at the earliest levels of auditory cortical processing. Phased-array ultrasound beams may be focused on a relatively small spot, thus delivering concentrated energy onto the desired locality in the brain. There is extensive evidence that irradiating damaged body organs, such as bone fractures or missing teeth, with low-intensity ultrasound causes re-growth of the damaged parts. Y. Tufail et al., in "Transcranial Pulsed Ultrasound Stimulates Intact Brain Circuits", report that "we found that ultrasound triggers TTX-sensitive neuronal activity in the absence of a rise in brain temperature (<0.01 °C)". Low-intensity pulsed ultrasound is known to help heal lacerated muscles and various soft tissues. Although the exact mechanism of healing is not known, it is probably linked to the amount of energy deposited in the cells, which energizes certain processes. We therefore conjecture that sound energy of the right frequency and intensity deposited in the brain will enhance neuron activity in that spot. Specifically, energizing neurons in the auditory and visual cortices simultaneously may promote and strengthen existing coordinating processes.
There are testimonies of people who say that they "hear voices". These testimonies indicate that the brain is able to generate internal sounds similar to the sounds originating through the auditory channel.
Our goal is to cause quasi-deaf people to “hear voices” generated mostly in the brain, by stimulating the brain “to put together” partial information received through the auditory channel with correlated information delivered through the visual channel and “GUESS” what was said.