The ability to recognize and interpret the speech of another person is one of the most heavily relied upon functions provided by the human sense of hearing. However, spoken communication typically occurs in adverse acoustic environments that include ambient noise, interfering sounds, background chatter, and competing voices. As such, the psychoacoustic isolation of a target voice from interference poses an obstacle to recognizing and interpreting the target voice. Multi-speaker situations are particularly challenging because competing voices generally have similar average characteristics. Nevertheless, recognizing and interpreting a target voice is a hearing task that unimpaired-hearing listeners are able to accomplish effectively, which allows them to engage in spoken communication in highly adverse acoustic environments. In contrast, hearing-impaired listeners have more difficulty recognizing and interpreting a target voice even in low-noise situations.
Previously available hearing aids typically utilize methods that improve sound quality in terms of ease of listening (i.e., audibility) and listening comfort. However, the previously known signal enhancement processes utilized in hearing aids do not substantially improve speech intelligibility beyond that provided by mere amplification, especially in multi-speaker environments. One reason for this is that it is particularly difficult, using previously known processes, to electronically isolate one voice signal from competing voice signals because, as noted above, competing voices have similar average characteristics. Another reason is that previously known processes that improve sound quality often degrade speech intelligibility, because even processes that aim to improve the signal-to-noise ratio often end up distorting the target speech signal. In turn, the degradation of speech intelligibility by previously available hearing aids exacerbates the difficulty hearing-impaired listeners have in recognizing and interpreting a target voice.