The present invention relates to the use of acoustic echoes to achieve spatial orientation. More specifically, it deals with devices and techniques that utilize reflected acoustic signals to locate and identify objects. Such devices and techniques provide useful aids for blind individuals and for sighted individuals in dark environments.
The term “echolocation” refers to methods for using sound instead of light as a means of sensing the presence of objects and determining their location and distance from the observer. In nature, echolocation is used by several species—most notably bats and dolphins—as a means of orientation and prey identification in darkness. These animals emit high-pitched “clicks” from their mouths and then sense the returning echo. The direction the animal's head is pointing when a click is echoed reveals the direction of an object in front of it, while the delay between the emitted click and the returning echo is proportional to the object's distance.
The echo delay for reflected sound is very brief. Sound travels at 340 meters-per-second (m/sec) in air and 1,500 m/sec in water. So for a bat, the interval between the emitted click and the echo from an object 5 meters away is 29 milliseconds (ms), while for a dolphin the interval is only 7 ms. While the auditory organs and brains of bats and dolphins are equipped to distinguish sounds at these minute intervals, the human ear and brain cannot distinguish sounds separated by less than 100 ms. For that reason, echolocation systems designed for humans must involve some form of electronic processing of the echo delay, either to “stretch” the delay interval or to use an interference technique known as “heterodyning” to convert the echo signal into a series of beats. The stretching technique is used in the human echolocation systems taught by Kim et al., U.S. Pat. No. 4,761,770 and Jorgensen, U.S. Pat. Nos. 4,907,136 and 5,107,467. The heterodyning method is used in the human echolocation systems disclosed by Kay, U.S. Pat. Nos. 3,366,922 and 4,292,678 and Hickling, U.S. Pat. No. 7,054,226.
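These intervals follow from the round-trip relation: the delay equals twice the distance divided by the speed of sound in the medium. The following is a minimal illustrative sketch; the function and variable names are for exposition only and form no part of the invention.

```python
def echo_delay_s(distance_m, speed_m_per_s):
    """Round-trip delay between an emitted click and its returning echo."""
    return 2 * distance_m / speed_m_per_s

# Bat in air (340 m/sec), object 5 meters away: about 29 ms
bat_delay_ms = echo_delay_s(5, 340) * 1000
# Dolphin in water (1,500 m/sec), object 5 meters away: about 7 ms
dolphin_delay_ms = echo_delay_s(5, 1500) * 1000
```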
An older echolocation system, taught by Krauth, U.S. Pat. No. 2,500,638, uses the echo delay interval to control the frequency of an audio oscillator, so that the user hears an audio signal that changes in pitch as the distance to the reflecting object varies.
A major disadvantage of all but one of these prior art human echolocation systems is that they give the user information relating only to the distance and location of an object. With the exception of the Hickling system, there is no capability of signaling the size or configuration of an object. And while the Hickling system has some capabilities with respect to distinguishing spatial features of an object, these capabilities come at the expense of a costly and delicate acoustic vector probe (AVP) and sophisticated digital processors.
Consequently, there is need, as yet unmet by the prior art, for a simple, inexpensive human echolocation system that can provide information not only as to the distance and direction of an object, but also its approximate size and dimensions. The simplest and most cost effective means of achieving this goal involves taking advantage of a property of sound that has thus far been overlooked by the prior art. The prior art devices all use only reflected sound as their source of object location—which is to say, they analyze only the sound that the object reflects. The present invention, however, derives information regarding the size and dimensions of an object based on analysis of the sound that the object does not reflect.
The scientific principle underlying the present invention is a property of sound known as “diffraction”. Sound waves can bend around objects, provided that the wavelength of the sound is larger than the object. This explains why low frequency (long wavelength) sounds travel much farther than high frequency (short wavelength) sounds: the short wavelength sounds are reflected back from objects in their path, while the long wavelength sounds diffract around the same objects and keep going. This is why distant thunder sounds like a dull thud (low frequency, long wavelength), but close thunder sounds like a crack (high frequency, short wavelength).
One reason that the prior art has been unable to take advantage of sound diffraction is that it has followed the model of bat echolocation rather than that of dolphin echolocation. Bats use echolocation primarily to locate and identify the insects on which they feed. In order to distinguish a tasty moth from an unsavory one, for example, the bat may need to detect moth features in the size range of a few millimeters (mm). To achieve echolocation resolution at that level of detail requires very high sound frequency, because lower frequency sounds will not reflect from such minuscule features, but will instead diffract around them. For example, to detect the distinctive antennae of its preferred moth prey, which may only be about 3 mm wide, the bat needs to emit its click at a frequency of about 100,000 cycles-per-second, or 100 kilohertz (kHz).
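The size figures above follow from the relation wavelength = speed of sound ÷ frequency. The sketch below is illustrative only (the constant and names are assumptions for exposition) and reproduces the numbers cited:

```python
SPEED_OF_SOUND_AIR_M_PER_S = 340.0

def wavelength_m(frequency_hz, speed_m_per_s=SPEED_OF_SOUND_AIR_M_PER_S):
    """Wavelength of sound at the given frequency; sound reflects from
    objects at least this large and diffracts around smaller ones."""
    return speed_m_per_s / frequency_hz

# 100 kHz click in air: wavelength about 3.4 mm, resolving 3 mm moth antennae
antenna_resolution_mm = wavelength_m(100_000) * 1000
# 14 kHz, the low end of the bat range: wavelength about 24 mm
low_end_mm = wavelength_m(14_000) * 1000
```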
The range of echolocation frequencies used by bats is actually between 14 kHz and 100 kHz, which corresponds to a size/wavelength range of about 24 mm down to about 3 mm. In fact, each species of bat has its own “signature” echolocation frequency, based on the dimensional characteristics of its preferred insect prey. The disadvantage of this type of high resolution echolocation is that it's incapable of directly gauging the size and dimensions of larger objects. The bat's ability to orient itself with respect to objects larger than insects depends on its motion through the air. By flying around a larger object and rapidly taking numerous echolocation readings, the bat comes to know the size and extent of the object it's dealing with.
This is why echolocation systems based on the bat model are inherently ill-suited to humans, and particularly to blind humans. Not only are humans unable to fly like bats; their mobility is much more restricted, especially in the case of blind people. Even if humans could maneuver around objects rapidly enough to take the numerous echolocation readings needed to gauge the size of an everyday object, the human brain lacks the bat brain's capacity to interpret this information without the aid of expensive data processing devices.
All of the prior art echolocation systems follow the bat model insofar as they deploy only ultrasound frequencies—that is, sound frequencies above the upper human audible limit of 20 kHz. But 20 kHz ultrasound will reflect off any object 17 mm or larger, so its echoes can reveal nothing about the dimensions of everyday objects.
The present invention, on the other hand, patterns itself on the echolocation system used by dolphins. Instead of emitting clicks in a single ultrasound frequency, the way bats do, dolphins emit a rapid series of clicks spanning a frequency range from audible to ultrasonic—typically 200 Hz to 150 kHz. These broad-band pulses enable a dolphin to determine the size of objects in the range from 7.5 m down to 10 mm.
When a dolphin is approaching a fish, for example, it may start by emitting low frequency clicks and then increase the frequency until it detects an echo from the target. If the dolphin begins to detect echoes at a frequency of 375 Hz, it knows that it's approaching a fish that's about 4 m long—perhaps a shark or other large predator—and the dolphin will swim away from danger. If, on the other hand, the dolphin begins to detect echoes at a frequency of 3 kHz, it knows it's approaching a fish that's about 0.5 m long—small enough to provide the dolphin's next meal. The dolphin can then further investigate its potential prey by emitting higher frequency clicks. For instance, the dolphin might emit 10 kHz clicks toward the front of the prey to detect the tentacles of a cuttlefish, one of its favorite foods.
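In water, where sound travels about 1,500 m/sec, the dolphin's size inference amounts to dividing the speed of sound by the lowest frequency that returns an echo. An illustrative sketch (function and constant names are hypothetical):

```python
SPEED_OF_SOUND_WATER_M_PER_S = 1500.0

def echoed_object_size_m(frequency_hz):
    """Approximate minimum size of an object that reflects (rather than
    diffracts) sound of the given frequency in water."""
    return SPEED_OF_SOUND_WATER_M_PER_S / frequency_hz

# First echo at 375 Hz: object about 4 m (large predator; swim away)
predator_size_m = echoed_object_size_m(375)
# First echo at 3 kHz: object about 0.5 m (potential prey)
prey_size_m = echoed_object_size_m(3000)
```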
The present invention utilizes the dolphin echolocation model for its human echolocation system. As further described herein, the present invention is a device that emits toward a target object a series of sound bursts or pulses, beginning at a low frequency and progressing stepwise through higher frequencies. In determining the size of the target object, a process of elimination is employed based on frequencies that are not reflected from the object, but instead diffract around it. For example, if the pulse frequency has progressed up to 200 Hz with no detected echo, then the target object must be smaller than 1.7 m. On the other hand, if an echo is detected at the lowest audible frequency of 20 Hz, then the target object must be 17 m or larger.
Once the device has run through this process of elimination, the first echoed frequency will indicate the overall size of the target object. So, if the first echoed pulse is 340 Hz, then the target object will have a maximum dimension of 1 m. Knowing this, the user can proceed to use higher frequency pulses aimed at specific parts of the target object to explore its configuration in greater detail. For example, the 1 m target object may be a 40-inch flat panel television screen. By aiming higher frequency pulses toward the bottom of the screen, the user may hear echoed pulses of 1 kHz, indicating the presence of a 0.3 m column supporting the screen.
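The process of elimination described above can be sketched as an ascending frequency sweep in which the first echoed frequency yields the target's approximate maximum dimension. The following is a minimal illustrative sketch, assuming a hypothetical echoed() predicate that reports whether a pulse at a given frequency returns an echo; it forms no part of the claimed apparatus.

```python
SPEED_OF_SOUND_AIR_M_PER_S = 340.0

def size_from_sweep(echoed, frequencies_hz):
    """Step through pulse frequencies in ascending order. Every frequency
    that fails to echo diffracted around the target, so the first echoed
    frequency bounds the target's maximum dimension."""
    for f in sorted(frequencies_hz):
        if echoed(f):
            return SPEED_OF_SOUND_AIR_M_PER_S / f
    return None  # no echo: target smaller than the shortest wavelength tried

# Hypothetical 1 m target (e.g., a 40-inch screen): it reflects any
# wavelength of 1 m or less, i.e., any frequency of 340 Hz or more.
reflects = lambda f: SPEED_OF_SOUND_AIR_M_PER_S / f <= 1.0
estimate_m = size_from_sweep(reflects, [20, 40, 85, 170, 340, 1000])
```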
Therefore, the present invention has all of the capabilities of the prior echolocation art in terms of determining the location of a target object, while adding the capability of determining the object's overall size and detailed configuration. The present invention also provides these capabilities without the need for expensive, bulky and delicate directional transponder arrays and/or data processing units. Hence, the present invention affords an economical, compact, durable and easy-to-use device for human echolocation.