An auditory display is a human-machine interface that provides information to a user by means of sound. Auditory displays are particularly suitable in applications where the user is not permitted, or not able, to look at a visual display. An example is a headphone-based navigation system that delivers audible navigation instructions. The sounds can appear to come from the appropriate physical location or direction; for example, a commercial may appear to come from a particular shop. Such systems are well suited to assisting blind people.
Headphone systems are well known. In typical systems, a pair of loudspeakers is mounted on a band so as to be worn with the loudspeakers adjacent to the user's ears. Closed headphone systems seek to reduce environmental noise by providing a closed enclosure around each of the user's ears, and are often used in noisy environments or in noise-cancellation systems. Open headphone systems have no such enclosure. The term “headphone” is used in this application to include earphone systems in which the loudspeakers are closely associated with the user's ears, for example mounted on or in the ears.
It has been proposed to use headphones to create virtual or synthesized acoustic environments. In the case where the sounds are virtualized so that listeners perceive them as coming from the real environment, the systems may be referred to as augmented reality audio (ARA) systems.
In systems creating such virtual or synthesized environments, the headphones do not simply reproduce the sound of a sound source, but create a synthesized environment with, for example, reverberation, echoes and other features of natural environments. This causes the sound to be externalized: the user perceives it naturally, rather than as originating from within the head. Reverberation in particular is known to play a significant role in the externalization of virtual sound sources played back over headphones. Accurate rendering of the environment is particularly important in ARA systems, where the acoustic properties of the real and virtual sources must be very similar.
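As an illustration of the rendering step described above, a virtual source is commonly placed in a synthesized environment by convolving a dry (anechoic) signal with a pair of room impulse responses, one per ear. The sketch below is illustrative only; the function name and the simple two-channel layout are assumptions, not part of any particular system described here.

```python
import numpy as np

def render_virtual_source(dry, rir_left, rir_right):
    """Place a dry (anechoic) source in a synthesized environment by
    convolving it with a left/right pair of room impulse responses.
    Illustrative sketch; real ARA renderers also apply head-related
    filtering and head-tracking, omitted here for brevity.
    """
    left = np.convolve(dry, rir_left)
    right = np.convolve(dry, rir_right)
    return np.stack([left, right])  # shape: (2, len(dry) + len(rir) - 1)

# Example: a short noise burst rendered with a toy echo pattern.
rng = np.random.default_rng(0)
dry = rng.standard_normal(100)
rir_l = np.zeros(10); rir_l[0] = 1.0; rir_l[7] = 0.3   # direct path + echo
rir_r = np.zeros(10); rir_r[0] = 1.0; rir_r[4] = 0.3
binaural = render_virtual_source(dry, rir_l, rir_r)
```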
A development of this concept is provided in Härmä et al., “Techniques and applications of wearable augmented reality audio”, presented at the AES 114th Convention, Amsterdam, Mar. 22 to 25, 2003. This paper presents a useful overview of a number of options. In particular, it proposes generating an environment corresponding to the one the user is actually present in, which can increase realism during playback.
However, there remains a need for convenient, practical portable systems that can deliver such an audio environment.
Further, such systems need data regarding the audio environment to be generated. The conventional way to obtain data about room acoustics is to play back a known signal over a loudspeaker and measure the signal received at a microphone. The room impulse response is then given by deconvolving the measured signal by the reference signal.
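The deconvolution mentioned above is typically carried out in the frequency domain: the spectrum of the measured signal is divided by the spectrum of the reference signal. A minimal sketch follows; the regularization constant `eps` and the function name are assumptions added to keep the division stable where the reference signal has little energy.

```python
import numpy as np

def estimate_rir(reference, measured, eps=1e-8):
    """Estimate a room impulse response by frequency-domain deconvolution.

    Computes H = Y * conj(X) / (|X|^2 + eps), i.e. a regularized division
    of the measured spectrum Y by the reference spectrum X. Illustrative
    sketch only; eps and the FFT length are assumptions.
    """
    n = len(reference) + len(measured) - 1   # avoid circular-convolution wrap
    X = np.fft.rfft(reference, n)
    Y = np.fft.rfft(measured, n)
    H = Y * np.conj(X) / (np.abs(X) ** 2 + eps)
    return np.fft.irfft(H, n)

# Example: recover a known two-tap "room" response from a noise reference.
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)                  # known playback signal
h = np.zeros(256); h[0] = 1.0; h[100] = 0.5    # direct path plus one echo
y = np.convolve(x, h)                          # what the microphone records
h_hat = estimate_rir(x, y)
```

In practice the reference is usually a swept sine or maximum-length sequence rather than white noise, since those excite all frequencies with high energy and improve the signal-to-noise ratio of the estimate.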
Attempts have been made to estimate the reverberation time from recorded sound without emitting a test signal, but these methods are not particularly accurate and do not yield additional data such as the room impulse response.
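For context on the reverberation time referred to above: when an impulse response is available, the reverberation time is conventionally obtained by Schroeder backward integration, fitting the slope of the energy-decay curve and extrapolating to -60 dB. The sketch below illustrates that standard method; the -5 to -25 dB fitting range and the function name are assumptions chosen for this example.

```python
import numpy as np

def reverberation_time(rir, fs):
    """Estimate RT60 from an impulse response via Schroeder backward
    integration: backward-integrate the squared response, fit the
    energy-decay slope between -5 and -25 dB, and extrapolate to -60 dB.
    Illustrative sketch; the dB fitting range is an assumption.
    """
    energy = np.asarray(rir, dtype=float) ** 2
    edc = np.cumsum(energy[::-1])[::-1]          # energy-decay curve
    edc_db = 10.0 * np.log10(edc / edc[0])
    t = np.arange(len(rir)) / fs
    mask = (edc_db <= -5.0) & (edc_db >= -25.0)
    slope, _ = np.polyfit(t[mask], edc_db[mask], 1)  # dB per second
    return -60.0 / slope

# Example: a synthetic exponential decay that falls 60 dB in 0.5 s.
fs = 1000
t = np.arange(fs) / fs                            # 1 second of samples
rir = 10.0 ** (-3.0 * t / 0.5)                    # amplitude: -60 dB at 0.5 s
rt60 = reverberation_time(rir, fs)                # close to 0.5
```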