Virtual environments are ubiquitous in modern computing, finding use in video games (in which a virtual environment may represent a game world); maps (in which a virtual environment may represent terrain to be navigated); simulations (in which a virtual environment may simulate a real environment); digital storytelling (in which virtual characters may interact with each other in a virtual environment); and many other applications. Modern computer users are generally comfortable perceiving, and interacting with, virtual environments. However, users' experiences with virtual environments can be limited by the technology for presenting virtual environments. For example, conventional displays (e.g., 2D display screens) and audio systems (e.g., fixed speakers) may be unable to realize a virtual environment in ways that create a compelling, realistic, and immersive experience.
Virtual reality (“VR”), augmented reality (“AR”), mixed reality (“MR”), and related technologies (collectively, “XR”) share an ability to present, to a user of an XR system, sensory information corresponding to a virtual environment represented by data in a computer system. Such systems can offer a uniquely heightened sense of immersion and realism by combining virtual visual and audio cues with real sights and sounds. Accordingly, it can be desirable to present digital sounds to a user of an XR system in such a way that the sounds seem to be occurring—naturally, and consistently with the user's expectations of the sound—in the user's real environment. Generally speaking, users expect that virtual sounds will take on the acoustic properties of the real environment in which they are heard. For instance, a user of an XR system in a large concert hall will expect the virtual sounds of the XR system to have large, cavernous sonic qualities; conversely, a user in a small apartment will expect the sounds to be more dampened, close, and immediate. Additionally, users expect that virtual sounds will be presented without delays.
In order to meet these expectations, audio signals may need to be processed for accurate magnitude response control. One example mechanism used for audio signal processing is a proportional parametric equalizer (PPE). A PPE is capable of offering continuous control over the parameters of an audio signal and over the audio signal's frequency content. A PPE may be an efficient tool for accurate magnitude response control, within defined constraints. More specifically, a cascade of shelving filters can be used to create a multi-band (e.g., 3-band) parametric equalizer or tone control with minimal processing overhead. However, continually recomputing the control parameters of such filters may require significant computing cycles and resources in an environment as dynamic as AR or dynamic spatialized audio capture.
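As an illustration of the shelving-filter cascade mentioned above, the following is a minimal sketch (not taken from the source) of a 3-band tone control built from two biquad shelving sections. The coefficient formulas are the widely known Audio EQ Cookbook (R. Bristow-Johnson) shelf formulas; the function names, sample rate, and corner frequencies are illustrative assumptions.

```python
import math
import cmath

def low_shelf(fs, f0, gain_db, S=1.0):
    """Biquad low-shelf coefficients (Audio EQ Cookbook formulas)."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / 2 * math.sqrt((A + 1 / A) * (1 / S - 1) + 2)
    cw, sqA = math.cos(w0), math.sqrt(A)
    b0 = A * ((A + 1) - (A - 1) * cw + 2 * sqA * alpha)
    b1 = 2 * A * ((A - 1) - (A + 1) * cw)
    b2 = A * ((A + 1) - (A - 1) * cw - 2 * sqA * alpha)
    a0 = (A + 1) + (A - 1) * cw + 2 * sqA * alpha
    a1 = -2 * ((A - 1) + (A + 1) * cw)
    a2 = (A + 1) + (A - 1) * cw - 2 * sqA * alpha
    return [c / a0 for c in (b0, b1, b2)], [1.0, a1 / a0, a2 / a0]

def high_shelf(fs, f0, gain_db, S=1.0):
    """Biquad high-shelf coefficients (Audio EQ Cookbook formulas)."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / 2 * math.sqrt((A + 1 / A) * (1 / S - 1) + 2)
    cw, sqA = math.cos(w0), math.sqrt(A)
    b0 = A * ((A + 1) + (A - 1) * cw + 2 * sqA * alpha)
    b1 = -2 * A * ((A - 1) + (A + 1) * cw)
    b2 = A * ((A + 1) + (A - 1) * cw - 2 * sqA * alpha)
    a0 = (A + 1) - (A - 1) * cw + 2 * sqA * alpha
    a1 = 2 * ((A - 1) - (A + 1) * cw)
    a2 = (A + 1) - (A - 1) * cw - 2 * sqA * alpha
    return [c / a0 for c in (b0, b1, b2)], [1.0, a1 / a0, a2 / a0]

def magnitude_db(sections, fs, f):
    """Magnitude response (dB) of a biquad cascade, evaluated on the unit circle."""
    z = cmath.exp(1j * 2 * math.pi * f / fs)
    h = 1.0 + 0j
    for b, a in sections:
        h *= (b[0] + b[1] / z + b[2] / z**2) / (a[0] + a[1] / z + a[2] / z**2)
    return 20 * math.log10(abs(h))

# A simple 3-band tone control: +6 dB bass shelf, -3 dB treble shelf;
# the middle band is shaped implicitly by the two shelves.
fs = 48000.0
eq = [low_shelf(fs, 250.0, 6.0), high_shelf(fs, 4000.0, -3.0)]
```

Because each shelf is only a single biquad, the whole tone control costs a handful of multiply-adds per sample; the expensive part, as the passage notes, is recomputing the coefficients whenever the environment changes.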
One way to determine the magnitude response of a prototype filter can be to apply the filter to a test signal and measure the output signal. Such an approach may be prohibitive in terms of computing resources. Another way can be to pre-compute a filter's response and store it, e.g., in a lookup table. At run time, the data corresponding to a frequency of interest can be fetched from the storage. Although each fetch from storage is individually inexpensive, the cost is incurred every time new filter data is needed, adding cumulative computational overhead. Accordingly, a more efficient mechanism for magnitude response control of filtered signals is desired.
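The lookup-table approach described above can be sketched as follows. This is an illustrative assumption, not the source's implementation: the prototype filter here is a one-pole low-pass, its magnitude response is tabulated once over a log-spaced frequency grid, and run-time queries interpolate the table instead of re-evaluating the filter.

```python
import math
import cmath
import bisect

def prototype_mag_db(f, fs, a=0.9):
    """Exact magnitude (dB) of a one-pole low-pass H(z) = (1-a)/(1 - a*z^-1)."""
    z = cmath.exp(1j * 2 * math.pi * f / fs)
    return 20 * math.log10(abs((1 - a) / (1 - a / z)))

def build_table(fs, n=512, f_lo=20.0):
    """Pre-compute the response on a log-spaced grid from f_lo up to Nyquist."""
    f_hi = fs / 2
    freqs = [f_lo * (f_hi / f_lo) ** (i / (n - 1)) for i in range(n)]
    return freqs, [prototype_mag_db(f, fs) for f in freqs]

def lookup_mag_db(freqs, mags, f):
    """Run-time fetch: linear interpolation between the two nearest grid points."""
    i = bisect.bisect_left(freqs, f)
    if i <= 0:
        return mags[0]
    if i >= len(freqs):
        return mags[-1]
    t = (f - freqs[i - 1]) / (freqs[i] - freqs[i - 1])
    return mags[i - 1] + t * (mags[i] - mags[i - 1])

fs = 48000.0
freqs, mags = build_table(fs)
approx = lookup_mag_db(freqs, mags, 1000.0)  # cheap table fetch
exact = prototype_mag_db(1000.0, fs)         # full evaluation, for comparison
```

The table build is paid once; each query is then a binary search plus one interpolation, which illustrates both the low per-fetch cost and the recurring overhead the passage describes.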