The prevalence of multichannel sound capture devices is ever increasing. For example, even casual users and typical consumers may now have access to sound capture devices that are configured to capture two or more channels of sound data, such as to support a stereo recording of a concert. Through the use of multiple channels, a user listening to these channels may be given a sense of the depth and location of the sources that generated the recorded sounds, such that the recording may give the user a feeling of “being there”.
Multichannel sound data may also be processed to support a variety of functionality. One example of this is to automatically determine a relative location of a sound source in the sound data. Thus, just as a user listening to the sound data may determine a relative position of a source, so too may the sound data be processed by a computing device to determine such a position. However, conventional techniques utilized to perform this processing typically relied on orthogonality of the sources and thus may fail in certain instances, such as when the sources collide in one or more frequencies.
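As a minimal sketch of how a computing device may estimate a relative source position from two channels, the following example estimates the inter-channel time delay via generalized cross-correlation with phase transform (GCC-PHAT). This is one well-known localization approach, assumed here purely for illustration; the passage above does not identify the specific conventional technique, and the function name and simulated signals are hypothetical.

```python
import numpy as np

def estimate_delay(left, right):
    """Estimate the sample delay of `right` relative to `left`
    using generalized cross-correlation with phase transform
    (GCC-PHAT). A positive result means `right` lags `left`."""
    n = len(left) + len(right)
    L = np.fft.rfft(left, n=n)
    R = np.fft.rfft(right, n=n)
    cross = np.conj(L) * R
    # PHAT weighting: keep only phase, discarding magnitude,
    # which sharpens the correlation peak for broadband sources.
    cross /= np.maximum(np.abs(cross), 1e-12)
    cc = np.fft.irfft(cross, n=n)
    # Rearrange so zero delay sits at the center of the array.
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return int(np.argmax(cc)) - max_shift

# Simulated two-channel capture: the right channel lags the
# left channel by 5 samples (source closer to the left side).
rng = np.random.default_rng(0)
src = rng.standard_normal(1024)
left = src
right = np.concatenate((np.zeros(5), src[:-5]))
delay = estimate_delay(left, right)  # expected: 5
```

The estimated delay, together with the known microphone spacing and the speed of sound, may then be converted into an angle of arrival. Note that delay-based methods sidestep the orthogonality assumption mentioned above, which is one reason techniques of this family are often used when sources overlap in frequency.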