Conventionally, an audio mixer used for processing audio signals is equipped with a frequency processing section and an amplitude processing section. The frequency processing section includes, for example, filters that control the frequency characteristics of the inputted audio signal. The amplitude processing section, generally referred to as a fader or dynamics, changes the amplitude of the inputted audio signal. Using these sections, signal processing usually called effect processing is performed on the frequency characteristics and amplitudes of the inputted audio signal.
When performing the effect processing to change the frequency characteristics, two frequency processing operations, such as rejection of signal components by a filter and emphasis of signal components by an equalizer, can be carried out in either order, and the resulting frequency characteristics are the same.
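This order-independence of linear frequency operations can be checked numerically. The sketch below is illustrative only: the `reject` and `emphasize` FIR kernels are hypothetical stand-ins for a filter and an equalizer, not taken from the mixer described.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(256)  # arbitrary test signal

reject = np.array([0.25, 0.5, 0.25])     # toy low-pass kernel (rejects highs)
emphasize = np.array([-0.5, 2.0, -0.5])  # toy emphasis (peaking) kernel

# Apply the two linear operations in both orders.
out_ab = np.convolve(np.convolve(signal, reject), emphasize)
out_ba = np.convolve(np.convolve(signal, emphasize), reject)

# Linear, time-invariant stages commute: the results are identical.
assert np.allclose(out_ab, out_ba)
```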
If the audio signal inputted to the audio mixer has a signal level below a predetermined threshold, then performing the amplitude processing on the audio signal using, for example, a compressor will not cause any change in the amplitude of the audio signal. In contrast, if an equalizer is placed at a stage preceding the compressor so as to increase the amplitude level of the signal in a predetermined frequency range above the threshold, and the amplitude processing is then performed, the amplitude of the output signal varies in accordance with that of the input signal. In other words, the characteristics of the output signal differ depending on whether or not the equalizer performs its processing before the amplitude processing of the compressor. Therefore, when processing signals using the audio mixer, the amplitude processing is usually performed on the input signal at a fixed stage, taking into consideration the influence of the amplitude processing.
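The order dependence described above can be sketched in a few lines. The `compress` function, threshold, ratio, and EQ gain below are all hypothetical stand-ins for the compressor and equalizer under discussion, not the mixer's actual characteristics.

```python
import numpy as np

def compress(x, threshold=1.0, ratio=4.0):
    """Toy hard-knee compressor: samples above the threshold are attenuated."""
    y = x.copy()
    over = np.abs(x) > threshold
    y[over] = np.sign(x[over]) * (threshold + (np.abs(x[over]) - threshold) / ratio)
    return y

eq_gain = 3.0  # hypothetical equalizer boost
x = 0.5 * np.sin(np.linspace(0, 2 * np.pi, 100))  # peak 0.5, below threshold

# Compressor first: the signal never crosses the threshold, so it is untouched.
comp_then_eq = eq_gain * compress(x)

# Equalizer first: the boosted signal exceeds the threshold and gets compressed.
eq_then_comp = compress(eq_gain * x)

assert np.allclose(compress(x), x)                  # below threshold: no change
assert not np.allclose(comp_then_eq, eq_then_comp)  # processing order matters
```

Unlike the linear filter stages, the compressor's threshold makes the processing nonlinear, which is why the two orderings diverge.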
In order to perform the amplitude processing on output signals having a variety of frequency characteristics and amplitude characteristics, it has been proposed to perform the effect processing by adding a switcher that allows the dynamics to be inserted at a desired stage of the frequency processing and/or amplitude processing.
Referring to FIG. 3, in a mixer 1, an input trim 3 performs fine adjustment on an audio input signal S1 inputted thereto from an input terminal 2, and outputs the adjusted signal to a frequency processing section 1A, which performs the frequency processing for each frequency range.
The audio input signal S1, which is inputted via the input terminal 2, is sent out through a main signal line BL to a signal characteristic processing section which includes an amplitude processing section, such as a fader 7, and the frequency processing section 1A. When the signal passes through the frequency processing section 1A, a high-pass filter (HPF) 4 rejects a DC component of the signal and low-frequency noise components such as the sound of wind, and a low-pass filter (LPF) 5 rejects high-frequency noise components. Further, an equalizer (EQ) 6 emphasizes or rejects, for example, human voices (vocals) and particular sounds of musical instruments such as cymbals. After the frequency processing performed by the respective stages of the frequency processing section 1A, the fader 7 adjusts the output level of the audio signal and an output terminal 8 outputs the adjusted audio signal.
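As a rough sketch of the main signal line, the toy chain below mimics the trim, HPF, LPF, EQ, and fader stages in order. The one-pole filters and gain values are illustrative assumptions, not the mixer's actual circuits.

```python
import numpy as np

def one_pole_hp(x, a=0.95):
    """Crude high-pass: removes DC and low-frequency rumble."""
    y = np.zeros_like(x)
    prev_x = prev_y = 0.0
    for i, s in enumerate(x):
        y[i] = a * (prev_y + s - prev_x)
        prev_x, prev_y = s, y[i]
    return y

def one_pole_lp(x, a=0.2):
    """Crude low-pass: smooths away high-frequency noise."""
    y = np.zeros_like(x)
    prev = 0.0
    for i, s in enumerate(x):
        prev = prev + a * (s - prev)
        y[i] = prev
    return y

def process(s1, trim=1.0, eq_gain=1.5, fader=0.8):
    """Hypothetical main-line chain: trim -> HPF -> LPF -> EQ -> fader."""
    x = trim * s1          # input trim 3: fine level adjustment
    x = one_pole_hp(x)     # HPF 4: reject DC and wind noise
    x = one_pole_lp(x)     # LPF 5: reject high-frequency noise
    x = eq_gain * x        # EQ 6: emphasis (flat gain as a stand-in)
    return fader * x       # fader 7: output level adjustment
```

For example, a pure DC input decays toward zero after the high-pass stage, consistent with the HPF rejecting the DC component.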
The mixer 1 includes selector switches SW1 to SW7 which connect and disconnect the input terminal 2, input trim 3, high-pass filter 4, low-pass filter 5, equalizer 6, fader 7, and output terminal 8 from one another. The selector switches SW1 to SW7 have their respective contacts connected to a dynamics 10 via matrix switchers 9A and 9B. The matrix switchers 9A and 9B are switched so as to place the dynamics 10 between any two adjacent stages among the input terminal 2, input trim 3, high-pass filter 4, low-pass filter 5, equalizer 6, fader 7, and output terminal 8.
FIG. 4(A) illustrates the selector switches SW (SW1 to SW7), which selectively switch the main signal line BL connecting the input terminal 2, input trim 3, fader 7, output terminal 8, and the respective frequency processing sections, each of the selector switches having four contacts 11A, 11B, 12A, and 12B. For example, when the dynamics 10 is selected, the contact 11A is connected to the contact 12A and the contact 11B is connected to the contact 12B, as shown in FIG. 4(B). At the other stages, where the dynamics 10 is not inserted, the contacts 11A are connected to the contacts 11B and the contacts 12A are disconnected from the contacts 12B, as shown in FIG. 4(C).
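The two switch states of FIG. 4(B) and FIG. 4(C) can be modeled as a simple routing function; the function name and arguments below are hypothetical, chosen only to make the contact logic concrete.

```python
def route(signal_in, dynamics, inserted):
    """Toy model of one four-contact selector switch SW.

    inserted=True : contacts 11A-12A and 11B-12B close, so the signal
                    detours through the dynamics (FIG. 4(B)).
    inserted=False: contacts 11A-11B close and the dynamics branch stays
                    open, so the signal passes straight through (FIG. 4(C)).
    """
    if inserted:
        return dynamics(signal_in)  # main line broken, dynamics in the path
    return signal_in                # straight through on the main line BL
```

For example, `route(2.0, lambda s: 0.5 * s, inserted=True)` returns the dynamics-processed value, while `inserted=False` passes the signal through unchanged.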
The dynamics 10 includes signal amplitude processing means such as a limiter and a compressor, which reduce a high output level, and an expander and a gate, which cut cross talk appearing at a low output level. Thus, using the matrix switchers 9A and 9B, the mixer 1 performs the amplitude processing on the inputted audio input signal S1 based on an output signal from any one of the input terminal 2, input trim 3, high-pass filter 4, low-pass filter 5, equalizer 6, fader 7, and output terminal 8, thereby setting the output level of the output signal.
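The amplitude processors named above can be sketched as toy per-sample functions. The thresholds, ratios, and knee shapes are illustrative assumptions, not the actual characteristics of the dynamics 10.

```python
import numpy as np

def limiter(x, ceiling=1.0):
    """Hard limiter: no sample may exceed the ceiling."""
    return np.clip(x, -ceiling, ceiling)

def gate(x, floor=0.05):
    """Gate: mute samples below the floor, e.g. low-level cross talk."""
    y = x.copy()
    y[np.abs(y) < floor] = 0.0
    return y

def expander(x, threshold=0.2, ratio=2.0):
    """Downward expander: push low-level samples further down."""
    y = x.copy()
    low = np.abs(x) < threshold
    y[low] = np.sign(x[low]) * threshold * (np.abs(x[low]) / threshold) ** ratio
    return y
```

The limiter and compressor act above a threshold, while the expander and gate act below one, which matches the high-level/low-level division described in the text.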
Since the mixer 1 having the aforementioned configuration inserts the dynamics 10 into a desired stage of the main signal line BL by switching the selector switches SW1 to SW6, the apparatus suffers from a problem that the audio input signal S1 is momentarily disconnected while the selector switches are being switched, and switching noise generated by the switchers is mixed with the audio signal being transmitted over the main signal line BL.
Also, the buses arranged in matrix form in the matrix switchers 9A and 9B are complex in construction and accordingly costly. In order to prevent the audio input signal S1 from disappearing during the switching operation, the matrix switchers 9A and 9B provided in the mixer 1 must be switched by a selection signal having a time constant, which makes the construction of the switchers even more complex.