Audio and video signals may be processed to obtain certain phase shifts among audio and video signals in different channels. The phase shifts may be exploited by multi-channel media systems to produce rich sound images that give listeners useful timing, depth, direction, and other localization cues, allowing the listeners to perceive sound sources in the sound images realistically.
Existing filtering techniques may be used to generate phase shifts in audio and video signals in some (e.g., surround) channels relative to some other (e.g., front) channels. However, the existing filtering techniques may not generate phase shifts with a high degree of accuracy, especially at low and high audible frequencies, resulting in poor audio content rendering and/or low-quality sound images. The existing filtering techniques may also require complicated, expensive logic, large memory consumption, and long computation time to generate phase shifts accurately. Resultant time delays under these techniques are often too large to be acceptable in certain media applications such as live broadcast. As a result, useful media processing features, even when already installed in systems, typically have to be omitted or turned off to compensate for the large time delays required by these techniques. The complicated, expensive logic required by these techniques may also restrict their application to a relatively narrow range of (e.g., high-end) computing devices.
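As a rough illustration of the kind of processing described above, the following sketch applies a 90-degree phase shift to a signal using an FFT-based Hilbert-transform trick. This is a hypothetical example for context only, not the method of this disclosure; the function name and approach are assumptions, and a block FFT of this sort introduces exactly the latency and memory cost that the passage above identifies as problematic for live applications.

```python
import numpy as np

def hilbert_90deg_shift(x):
    """Return a copy of x with every frequency component shifted by -90 degrees.

    Illustrative sketch (not from the source document): compute the
    analytic signal by zeroing negative frequencies and doubling the
    positive ones in the FFT, then take the imaginary part, which is
    the Hilbert transform (a uniform -90-degree phase shift) of x.
    """
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0                      # keep DC as-is
    if n % 2 == 0:
        h[n // 2] = 1.0             # keep the Nyquist bin as-is
        h[1:n // 2] = 2.0           # double positive frequencies
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(X * h)   # analytic signal x + j*H(x)
    return analytic.imag            # H(x): the phase-shifted copy
```

For a pure sinusoid spanning an integer number of cycles, the output is the same sinusoid delayed by a quarter period (sin becomes -cos), while the amplitude is preserved. Note that the whole block must be buffered before any output sample is available, which is one reason block-transform phase shifters can be unsuitable for low-latency broadcast paths.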
The approaches described in this section are approaches that could be pursued, but not necessarily approaches that have been previously conceived or pursued. Therefore, unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, issues identified with respect to one or more approaches should not be assumed to have been recognized in any prior art on the basis of this section, unless otherwise indicated.