Audio ducking is an effect commonly used in radio and music production, in which the level of one signal is reduced by the presence and strength of another signal, called the side-chain signal. Side-chaining thus uses the dynamic level of one input to control the level of another signal. A typical application is to automatically lower the level of a musical signal when a voice-over starts, and to automatically bring the level up again when the voice-over stops. Ducking can lead to a “pumping” or modulating effect. For example, when a music signal is side-chained to a voice-over, the voice-over may begin, pause for a brief time, then begin again, and so on. The side-chaining then causes the music level to be reduced when the voice-over starts, raised during the pause, reduced again when the voice-over resumes, and so forth.
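The basic ducking behavior described above can be sketched as follows. This is a minimal illustration, not any particular product's algorithm; the threshold and reduction amount are hypothetical parameter values chosen for the example.

```python
def duck(main, side_chain, threshold=0.1, reduction_db=-12.0):
    """Return the main signal with ducking applied sample by sample.

    Whenever the side-chain signal is "present" (its magnitude exceeds
    the threshold), the main signal is attenuated by reduction_db.
    """
    reduction_gain = 10 ** (reduction_db / 20.0)  # dB -> linear gain
    out = []
    for m, s in zip(main, side_chain):
        out.append(m * reduction_gain if abs(s) > threshold else m)
    return out
```

Applied to a music/voice pair, the music samples pass through unchanged while the voice is silent and are scaled down while it is active; the abrupt switching in this naive version is exactly what produces the audible “pumping” discussed above.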
Certain terms used herein will now be defined. Ducking is a leveling mechanism whereby the level of one signal is reduced by the presence/level of another signal. Look Ahead Time comprises a time constant that is used for “pre-fetching” and analyzing the incoming audio material. Peak data of an audio file is used to visualize the audio file at a certain zoom level without requiring a full scan of the audio file. In order to determine peak data of an audio file, the entire audio file is scanned and divided into sections of an appropriate number of audio samples. Each section is processed to calculate the peak values (minimum/maximum) within the section as well as the RMS value or another loudness measure. A DAW (digital audio workstation) is a software environment used to record, edit and mix audio files. Clips are arranged in a sequence and placed on tracks. The length of a clip can differ from the length of the audio the clip contains. Furthermore, the audio can have sections of silence.
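The peak-data computation defined above can be sketched as follows. The section size is a hypothetical choice for illustration; real DAWs pick it according to the zoom level.

```python
import math

def peak_data(samples, section_size=1024):
    """Return one (min, max, rms) tuple per section of the audio.

    The file is scanned once, divided into fixed-size sections, and each
    section is reduced to its minimum/maximum peak values plus an RMS
    loudness value, as described above.
    """
    data = []
    for i in range(0, len(samples), section_size):
        section = samples[i:i + section_size]
        rms = math.sqrt(sum(s * s for s in section) / len(section))
        data.append((min(section), max(section), rms))
    return data
```

A visualization layer can then draw the min/max envelope and RMS shading from this compact summary instead of re-reading every sample of the file.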
In the hardware world, several mechanisms are available for automatically reducing the level of a first audio signal by “side-chaining” a second audio signal into a leveling device. One example is a radio station where the first signal (music) is lowered by specified settings when a second signal (a station announcer) occurs. The disadvantage of this mechanism is that the second signal must be present before the first signal can be reduced. This can only be done in real time; therefore the reduction of the first signal is almost always noticeable and more or less drastic. The leveling device detects the second signal and lowers the volume of the first signal in real time to a predetermined reduction amount (e.g., some 3 dB). A time behavior of the first signal reduction (referred to as an Attack) can rarely be set, because in the moment the second signal occurs, the second signal has priority. The level reduction of the first signal has to happen quickly, if not immediately. This does not sound good in most cases. A similar situation occurs with the “release time” of the first signal, at the moment the second (side-chain) signal is no longer present. The release time is static and not program-dependent, so a quick level change from the reduced level back to the original level usually results. If the processor has a significantly large look-ahead time, the attack time can be made large enough to smoothly fade out the music before the voice starts. However, look-ahead time increases the overall system latency.
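The attack/release and look-ahead behavior described above can be sketched as follows. This is an illustrative one-pole smoothing model under assumed, hypothetical coefficients (`attack`, `release`, `reduction`, `lookahead`), not the design of any specific leveling device.

```python
def smooth_gain(side_levels, attack=0.2, release=0.05, reduction=0.25):
    """Smooth the gain toward a target with separate attack/release rates.

    While the side-chain level exceeds a fixed threshold, the target gain
    is the reduced value; otherwise it is unity. A one-pole ramp moves the
    current gain toward the target, faster on attack than on release.
    """
    gains, g = [], 1.0
    for level in side_levels:
        target = reduction if level > 0.1 else 1.0
        coeff = attack if target < g else release  # attack vs. release
        g += coeff * (target - g)
        gains.append(g)
    return gains

def duck_with_lookahead(main, side_levels, lookahead=4, **kw):
    """Delay the main signal so the gain can start falling early.

    The gain is computed from the undelayed side-chain, while the main
    signal is delayed by `lookahead` samples; the fade-out therefore
    begins before the voice arrives in the output, at the cost of added
    overall latency, as noted above.
    """
    delayed = [0.0] * lookahead + main[:len(main) - lookahead]
    gains = smooth_gain(side_levels, **kw)
    return [m * g for m, g in zip(delayed, gains)]
```

With a static release coefficient, this sketch also reproduces the problem described above: the return to full level is always the same shape regardless of the program material.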
One workaround is to manually lower the first signal shortly before the second signal starts and to bring the level of the first signal up again later. These level changes over time are called fades. The fades' “algorithmic” behavior (more linear or more logarithmic) is determined by the manual action of the audio engineer operating the mixing desk.
In a Digital Audio Workstation (DAW) environment, a Look Ahead Time can be set for the leveling device. This time is not program-dependent and is often too short. The human ear can typically distinguish between a “mechanical” fade and an “artistic/well intended” fade. It is usually not possible to choose between different fade curves (logarithmic, linear, etc.).
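The two fade-curve shapes mentioned above can be sketched as follows. The exact formulas are illustrative choices, not definitions taken from any DAW; the `floor_db` parameter is a hypothetical end point for the logarithmic curve.

```python
def linear_fade(n):
    """Gain values fading linearly from 1.0 to 0.0 over n steps."""
    return [1.0 - i / (n - 1) for i in range(n)]

def logarithmic_fade(n, floor_db=-60.0):
    """Gain values fading in equal dB steps down to floor_db.

    Because loudness perception is roughly logarithmic, equal-dB steps
    tend to sound smoother than the linear curve above.
    """
    return [10 ** (floor_db * (i / (n - 1)) / 20.0) for i in range(n)]
```

Multiplying a signal by either gain sequence produces the corresponding fade-out; reversing the sequence gives the matching fade-in.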