Some of the most productive interactions in the workplace occur when a small group of people gather at a blackboard or whiteboard and actively participate in presenting and discussing ideas. However, it is often difficult to support this style of interaction when participants are at different geographical locations, a situation that occurs more and more frequently as organizations become more geographically distributed. To date, conventional audio- and video-conferencing systems are not well suited to this scenario. Effective collaboration relies on the ability of the parties to see each other and the shared collaboration surface, and to see where the others are looking and/or gesturing. It also relies on the ability to hear remotely located participants from the direction in which they are rendered on the screen. Although conventional video-conferencing systems can use multi-user screen-sharing applications to provide a shared workspace, the audio typically does not correlate with the locations of the participants presented on the display.
In recent years a number of audio processing techniques have been developed to provide a more realistic audio- and video-conferencing experience. For stereo spatial audio rendering, methods based on head-related transfer functions (“HRTFs”) can be used, but they require participants to wear headphones. Loudspeaker-based methods typically address the cancellation of crosstalk, such as the left ear hearing the signal from the right loudspeaker and vice versa. These methods, however, are limited to cases where both loudspeakers are located far from the participants (i.e., the loudspeaker-to-loudspeaker distance is much less than the loudspeaker-to-user distance), and the audio output from the loudspeakers still does not correlate with the locations of the participants displayed.
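As an illustrative sketch (not part of the original text), HRTF-based rendering can be understood as filtering a mono source with a left/right pair of head-related impulse responses (HRIRs, the time-domain counterparts of HRTFs) measured for the desired source direction. The impulse-response values below are hypothetical toy numbers chosen only to show the mechanism, not measured HRIR data:

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Render a mono signal to a 2-channel stereo signal by
    convolving it with a left/right HRIR pair for one direction."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

# Hypothetical HRIR pair: the right-ear response is delayed and
# attenuated relative to the left, mimicking a source located to
# the listener's left (interaural time and level differences).
hrir_l = np.array([1.0, 0.5, 0.0])
hrir_r = np.array([0.0, 0.6, 0.3])

signal = np.array([1.0, 0.0, -1.0, 0.0])  # short mono test signal
stereo = render_binaural(signal, hrir_l, hrir_r)
```

Because this scheme delivers each filtered channel directly to one ear, it presumes headphone playback; over loudspeakers, each ear would also hear the opposite channel, which is the crosstalk that loudspeaker-based methods must cancel.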
Users and manufacturers of multimedia-conferencing systems continue to seek improvements in the audio aspects of the conferencing experience.