The present invention relates to an audio system and method, for example an audio system and method for obtaining an audio output for a scene comprising at least one object and at least one sound source.
It is known that a computer game or other interactive application may be made up of multiple computer-generated objects in a computer-generated scene. The objects may be represented in 3D. For example, objects may be represented as polygonal meshes, which may also be referred to as a wire-frame representation.
A scene may also contain multiple sound sources. A given sound source may be associated with an object in the scene, or may be independent of the objects in the scene. Sound associated with a given sound source may comprise, for example, recorded sound and/or computer-generated sound.
A scene may further contain a position with respect to which sound is determined. The position with respect to which sound is determined may be referred to as the position of a listener. The position of the listener may be used to determine what a user (for example, a player of the game) hears.
The position of the listener may be attached to a virtual camera position. If the position of the listener is attached to a virtual camera position, the position from which the user sees the scene may be the same as the position from which the user hears sounds associated with the scene. However, while the user may only be able to see objects that are in front of the virtual camera, in some circumstances the user may be able to hear sound sources at any angle relative to the listener, including sound sources behind the listener.
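For illustration, the direction of a sound source relative to a listener attached to a virtual camera may be computed as an azimuth angle in the horizontal plane. The following is a minimal sketch, not taken from any particular system; the helper name `azimuth_to_source` and the (x, z) ground-plane convention are assumptions:

```python
import math

def azimuth_to_source(listener_pos, listener_forward, source_pos):
    """Angle (degrees) of a sound source relative to the listener's
    facing direction, in the horizontal (x, z) plane.
    0 = straight ahead, positive = to the listener's right,
    +/-180 = directly behind the listener."""
    dx = source_pos[0] - listener_pos[0]
    dz = source_pos[1] - listener_pos[1]
    source_angle = math.atan2(dx, dz)                    # source direction in world space
    facing_angle = math.atan2(listener_forward[0], listener_forward[1])
    rel = math.degrees(source_angle - facing_angle)
    return (rel + 180.0) % 360.0 - 180.0                 # wrap into (-180, 180]
```

A source directly behind the listener yields an angle of +/-180 degrees, reflecting that sound sources may be audible at any angle, including behind the listener.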
3D audio may refer to a technique used to process audio in such a way that a sound may be positioned anywhere in 3D space. The positioning of sounds in 3D space may give a user the effect of being able to hear a sound over a pair of headphones, or from another source, as if it came from any direction (for example, above, below or behind). The positioning of sound in 3D space may be referred to as spatialisation. 3D audio may be used in applications such as games, virtual reality or augmented reality to enhance the realism of computer-generated sound effects supplied to the user.
One method of obtaining 3D audio may be binaural synthesis. Binaural synthesis may aim to process monaural sound (a single channel of sound) into binaural sound (a plurality of channels, for instance at least one channel for each ear, for example a channel for each headphone of a set of headphones) such that it appears to a listener that sounds originate from sources at different positions relative to the listener, including sounds above, below and behind the listener.
For example, binaural synthesis may be used to synthesise sound to be delivered to a pair of headphones, in which different signals are sent to the left and right ears of a user. To the user, a difference between the signal received by the user's left ear and the signal received by the user's right ear (for example, a relative time delay) may make it seem that the sound is coming from a particular position. The use of binaural synthesis to obtain audio signals for two headphones or other devices may be described as binaural rendering.
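The relative time delay mentioned above (an interaural time difference) may be sketched in code as follows. This is a deliberately crude illustration, assuming a simple sine-law approximation of the extra path length to the far ear and an assumed head width; it is not a complete binaural synthesis:

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s, approximate speed of sound in air
HEAD_WIDTH = 0.18        # m, assumed ear-to-ear distance

def apply_itd(mono, sample_rate, azimuth_deg):
    """Crude binaural cue: delay the far ear by the extra path length
    implied by the source azimuth (0 = ahead, +90 = fully right)."""
    # Extra distance to the far ear, sine-law approximation.
    extra = HEAD_WIDTH * abs(np.sin(np.radians(azimuth_deg)))
    delay = int(round(extra / SPEED_OF_SOUND * sample_rate))
    delayed = np.concatenate([np.zeros(delay), mono])
    padded = np.concatenate([mono, np.zeros(delay)])
    if azimuth_deg >= 0:                  # source to the right: far ear is the left
        left, right = delayed, padded
    else:                                 # source to the left: far ear is the right
        left, right = padded, delayed
    return np.stack([left, right])        # shape (2, n + delay): one row per ear
```

At a 48 kHz sample rate and a source at 90 degrees, the far ear is delayed by roughly 25 samples (about 0.5 ms), which is within the range of interaural time differences a listener can perceive as a directional cue.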
Effects of a user's body on sound received by the user may be simulated. For example, HRTF (head-related transfer function) methods may be used to simulate the effect of a listener's head, pinnae and shoulders on sound received from a particular direction.
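In practice, HRTF-based rendering may be performed by convolving a mono signal with a pair of head-related impulse responses (HRIRs), one per ear, measured for the desired direction. A minimal sketch, assuming the HRIRs are supplied externally (for example from a measured HRTF database) and have equal length:

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Binaural rendering by convolving a mono signal with the
    head-related impulse responses (HRIRs) for one direction.
    hrir_left and hrir_right are assumed inputs of equal length,
    e.g. taken from an HRTF database for a given azimuth/elevation."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])   # one output channel per ear
```

Each ear's channel then carries the direction-dependent filtering (time delay, level difference and spectral shaping) captured in the measured responses.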
Various forms of spatialisation may be used. For example, audio signals may be processed to produce spatialisation over speakers or over surround sound, for example over a 5.1 surround sound system. The audio signals may be processed so that a user listening to the signals over speakers or over surround sound perceives the sound as coming from a particular position.
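One elementary form of speaker spatialisation is amplitude panning between a pair of stereo speakers. The following sketch uses a constant-power panning law (a common convention, assumed here rather than taken from the text), so that the perceived loudness stays roughly constant as the source moves between the speakers:

```python
import math

def constant_power_pan(pan):
    """Stereo speaker gains for pan in [-1, 1]
    (-1 = fully left, 0 = centre, +1 = fully right).
    Constant-power law: left**2 + right**2 == 1 for every pan value."""
    angle = (pan + 1.0) * math.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)
```

Surround formats such as 5.1 generalise this idea to panning across more than two speakers, but the constant-power principle is the same.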
There exist systems in which sound propagation is calculated by ray tracing. See, for example, Carl Schissler and Dinesh Manocha, GSound: Interactive Sound Propagation for Games, AES 41st Conference: Audio for Games, 2011.
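The simplest element of such a geometric-acoustics approach is the direct path from source to listener, which determines a propagation delay and a distance attenuation. The sketch below shows only this direct-path step and is not the GSound algorithm itself, which additionally traces reflected and diffracted paths; the 1/r gain law and the near-field clamp are illustrative assumptions:

```python
import math

def direct_path(source_pos, listener_pos, speed_of_sound=343.0):
    """Direct sound path from source to listener: returns the
    propagation delay in seconds and a 1/r distance attenuation.
    A geometric-acoustics simplification of ray-traced propagation."""
    dist = math.dist(source_pos, listener_pos)
    delay = dist / speed_of_sound       # seconds of propagation delay
    gain = 1.0 / max(dist, 1.0)         # clamp to avoid blow-up near the source
    return delay, gain
```

A full ray-tracing system would emit many rays from the source, reflect them off scene geometry, and accumulate one delay/gain pair per path reaching the listener, producing echoes and occlusion effects in addition to the direct sound.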