A person may interact with a computing device using audio information in several different ways. In some examples, a person may provide a voice command to the computing device so that the computing device may take the appropriate action specified by the voice command. Also, the computing device may receive speech from a user and translate this speech to text. These types of audio signals may be considered audio signals for machine listening. For example, a speech-to-text converter and/or a voice command interpreter may receive the audio signals and process them to produce text or machine instructions according to the audio signals.
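As a minimal illustration of the machine-listening path described above, the sketch below routes a recognized utterance either to a voice-command interpreter or to plain speech-to-text output. The command vocabulary and the structure of the returned instruction are purely illustrative assumptions, not any particular device's API; the sketch also assumes the speech has already been transcribed upstream.

```python
# Illustrative sketch only: the command vocabulary and the shape of the
# returned "machine instruction" tuple are assumptions for this example.
KNOWN_COMMANDS = {"call", "open", "play"}

def interpret(transcript: str):
    """Return a machine instruction for a known voice command,
    or the raw text when the utterance is ordinary dictation."""
    words = transcript.strip().lower().split()
    if words and words[0] in KNOWN_COMMANDS:
        # Voice command: first word selects the action, rest are arguments.
        return ("command", words[0], words[1:])
    # Otherwise treat the utterance as dictation (speech-to-text output).
    return ("text", transcript)
```

In this sketch, `interpret("Call Alice")` yields a command instruction, while an utterance with no leading command keyword passes through as text.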
In other examples, a person may provide audio information to the computing device for purposes of communicating with another person. For example, the user may engage in a telephone call, audio chat, video conference, etc. with another person. As such, the computing device may transmit, through a network, an audio signal that captures the received speech so that the other person may listen to the audio signal. These types of audio signals may be considered audio signals for human listening.
However, before processing or transmitting the audio signal, the computing device may perform pre-processing on the audio signal to remove undesirable components of the audio signal such as noise, for example. Typically, pre-processing of audio signals may include noise reduction, noise suppression, echo removal, etc. However, algorithms used for pre-processing of the audio signal generally represent compromises between algorithms that are optimally tuned for processing audio signals for later human listening and algorithms that are tuned for machine listening, such that the algorithm is not optimized for either final use.
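One common family of noise-suppression pre-processing is spectral subtraction: an estimate of the noise spectrum is subtracted from each frame of the signal spectrum. The sketch below is a minimal, assumption-laden version of that idea (fixed frame size, no overlap or windowing, a single static noise estimate), not the pre-processing any specific device uses.

```python
import numpy as np

def spectral_gate(signal, noise_estimate, frame=256, reduction=1.0):
    """Basic spectral subtraction: suppress the estimated noise floor
    in each frame's magnitude spectrum, keeping the original phase.

    Simplifying assumptions: non-overlapping rectangular frames and a
    single static noise estimate taken from a noise-only segment.
    """
    out = np.zeros_like(signal, dtype=float)
    # Magnitude spectrum of one noise-only frame serves as the noise floor.
    noise_mag = np.abs(np.fft.rfft(noise_estimate[:frame]))
    for start in range(0, len(signal) - frame + 1, frame):
        spec = np.fft.rfft(signal[start:start + frame])
        mag = np.abs(spec)
        # Subtract the noise floor, clamping negative results at zero.
        cleaned = np.maximum(mag - reduction * noise_mag, 0.0)
        # Reconstruct the frame with the cleaned magnitude, original phase.
        out[start:start + frame] = np.fft.irfft(
            cleaned * np.exp(1j * np.angle(spec)), n=frame)
    return out
```

Tuning choices such as the `reduction` factor embody exactly the compromise the passage describes: heavier suppression can help a recognizer but introduces artifacts that a human listener finds objectionable, and vice versa.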