1. Field of the Invention
This invention concerns an audio signal processing device which performs virtual acoustic image localization processing, an angular velocity sensor interface device, and a signal processing device.
2. Description of the Related Art
In recent years, numerous audio and image reproduction devices have been proposed which use a motion sensor to perform virtual acoustic image localization processing. For example, out-of-head virtual acoustic image localization headphones have been proposed in which the angle of rotation of the listener's head is detected, and digital signal processing is performed to localize the virtual acoustic image of an audio signal outside the head in correspondence with the detected angle data.
Below, FIG. 6 through FIG. 10 are used to explain such a conventional audio signal processing device, which performs signal processing to localize, outside the head, the virtual acoustic image of an audio signal in correspondence with angle data. FIG. 6 is a block diagram of an audio signal processing device having a rotation angle detection function, designed to localize the virtual acoustic image such that the position of the reproduced acoustic image when the audio signals are reproduced by headphones is the same as that for two reproducing speakers placed before the listener.
In FIG. 6, when the headphones 2, having left and right speakers 2L and 2R and on which is mounted an angular velocity sensor 1 to detect rotation angles, undergo rotational motion due to rotation of the head of the listener, the angular velocity sensor 1 outputs an analog detection signal with voltage proportional to the angular velocity. This detection signal from the angular velocity sensor 1 passes through a band-limiting filter 3 which removes unnecessary high-frequency components, and is supplied to an A/D converter 4 which converts analog signals into digital signals.
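The band limiting described above can be sketched, for illustration only, as a first-order low-pass stage applied to the sensor samples; the coefficient ALPHA is an assumed value, not a parameter given in this description.

```python
# Illustrative first-order low-pass standing in for the band-limiting
# filter 3, which removes unnecessary high-frequency components before
# A/D conversion.  ALPHA is an assumed coefficient, not from the source.

ALPHA = 0.1

def band_limit(samples):
    """Smooth a sequence of sensor samples with a one-pole low-pass."""
    y, out = 0.0, []
    for x in samples:
        y += ALPHA * (x - y)   # y[n] = y[n-1] + ALPHA * (x[n] - y[n-1])
        out.append(y)
    return out
```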
Digital detection signals, obtained on the output side of the A/D converter 4 by digitizing the analog detection signals, are supplied to the microprocessor 5, where these digital detection signals for the angular velocity are integrated to obtain angle data. From this angle data the microprocessor 5 calculates the rotation angle for actual localization of the acoustic image, and supplies the corresponding signal processing data to the digital signal processing circuit 6.
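The integration performed in the microprocessor 5 amounts to accumulating angular-velocity samples scaled by the sampling interval. A minimal sketch follows, in which the sample rate FS and the sensor sensitivity are assumed values not given in this description.

```python
# Hypothetical sketch of angular-velocity integration as performed in
# the microprocessor 5.  FS and SENSITIVITY are illustrative assumptions.

FS = 100.0          # assumed A/D sample rate, in Hz
SENSITIVITY = 1.0   # assumed sensor scale, in (degrees/s) per count

def integrate_angle(samples, angle0=0.0):
    """Accumulate angular-velocity samples into a rotation angle (degrees)."""
    angle = angle0
    angles = []
    for s in samples:
        angle += s * SENSITIVITY / FS   # rectangular integration: w * dt
        angles.append(angle)
    return angles

# A constant 10 deg/s reading held for one second yields about 10 degrees.
angles = integrate_angle([10.0] * 100)
```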
On the other hand, audio signals from the sound source, supplied to the audio signal input terminals 7 and 8, pass through the A/D converters 9 and 10 respectively for conversion from analog signals into digital signals, and are supplied to the digital signal processing circuit 6.
In this digital signal processing circuit 6, audio processing is performed in order to localize, outside the head, the virtual acoustic image of the necessary audio signals corresponding to the angle data calculated by the microprocessor 5, and the resulting right and left audio signals are supplied to D/A converters 11R and 11L, which convert digital signals into analog signals.
The right and left analog audio signals obtained from the D/A converters 11R and 11L pass through the power amplifiers 12R and 12L respectively, and are supplied to the right and left speakers 2R and 2L of the headphones 2, so that the listener hears signals which localize an optimal virtual acoustic image outside the head. The angular velocity sensor 1 is mounted on the headphones 2, so as to detect rotations of the listener's head.
FIG. 8 shows this digital signal processing circuit 6, divided into the part whose characteristics change according to movements of the listener and the part whose characteristics do not change. In FIG. 8, 7a and 8a denote input terminals to which are supplied, respectively, the digitized audio signals from the audio input terminals 7 and 8; the digital audio signals supplied to input terminal 7a pass through the digital filter 13 and are supplied to the adder 17, and also pass through the digital filter 14 and are supplied to the adder 18.
Further, digital audio signals supplied to the input terminal 8a pass through the digital filter 15 and are supplied to the adder 17, and digital audio signals supplied to this input terminal 8a pass through the digital filter 16 and are supplied to the adder 18. In this case, the digital filters 13, 14, 15 and 16 comprise, for example, FIR filters.
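The four digital filters 13 through 16 and the adders 17 and 18 thus form a two-input, two-output convolution network. A minimal sketch of this network follows; the short impulse responses used in testing are placeholders standing in for the transfer functions, not measured data.

```python
import numpy as np

# Illustrative 2-in/2-out FIR network corresponding to the digital
# filters 13-16 and the adders 17 and 18.  The impulse responses
# h13..h16 stand in for the transfer functions realized by the filters;
# real coefficients would come from measured head-related responses.

def fir(x, h):
    """Plain FIR convolution, truncated to the input length."""
    return np.convolve(x, h)[:len(x)]

def filter_network(in7a, in8a, h13, h14, h15, h16):
    right = fir(in7a, h13) + fir(in8a, h15)  # adder 17 output
    left  = fir(in7a, h14) + fir(in8a, h16)  # adder 18 output
    return right, left
```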
In the drawing of the principle of acoustic image localization shown in FIG. 7, the digital filters 13, 14, 15 and 16 respectively realize transfer functions HRR, HRL, HLR, and HLL from the speakers SL and SR to both the ears, for the case in which the listener M is facing in a fixed direction (for example, the forward direction, that is, the direction facing the midpoint between the speakers SL and SR).
The outputs from the digital filter 13 and digital filter 15 are added by the adder 17, and this addition signal is supplied to the time-difference application circuit 19; the outputs from the digital filter 14 and digital filter 16 are added by the adder 18, and this addition signal is supplied to the time-difference application circuit 20. The output signals from these time-difference application circuits 19 and 20 pass through the level-difference application circuits 21 and 22, and are supplied to the D/A converters 11R and 11L, respectively.
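A time-difference application circuit such as 19 or 20 can be sketched as a variable delay line; the integer-sample version below is an illustrative simplification, since a practical circuit would also support fractional delays.

```python
from collections import deque

# Illustrative variable delay line standing in for the time-difference
# application circuits 19 and 20.  The delay is set per sample by the
# control input, modeling the control terminals 19a and 20a.

class DelayLine:
    def __init__(self, max_delay):
        # Ring buffer pre-filled with silence.
        self.buf = deque([0.0] * max_delay, maxlen=max_delay)

    def process(self, x, delay):
        """Return the sample written `delay` steps ago, then store x."""
        out = self.buf[-delay] if delay > 0 else x
        self.buf.append(x)
        return out
```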
Here, changes in the transfer functions resulting from movements of the listener's head are effected through control signals, focusing on the time differences and level differences of the signals reaching both ears, supplied to the control terminals 19a and 20a of the time-difference application circuits 19 and 20 and to the control terminals 21a and 22a of the level-difference application circuits 21 and 22, respectively. In this way processing is simplified; for example, when the listener's head, initially facing the forward direction, rotates to the right, signals reaching the left ear arrive earlier than in the original state, and signals reaching the right ear arrive later.
Moreover, the left ear approaches the sound source (the speakers SL and SR), and the right ear recedes from the sound source, so that the level of signals reaching the left ear is high compared with the original state, and the level of signals reaching the right ear is low compared with the original state. Hence by using a microprocessor 5 to control only this change with respect to a reference position, a dynamic transfer function can be simulated.
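The simplification described above, in which only the change relative to the reference position is controlled, can be sketched as follows. The sinusoidal form and the constants are assumptions for illustration only; the actual characteristics are those of FIG. 9 and FIG. 10.

```python
import math

# Hypothetical model of the control described in the text: for a head
# rotation theta (degrees, positive = rightward), the left ear
# approaches the sound source and the right ear recedes.  MAX_DELAY_MS,
# MAX_LEVEL_DB, and the sinusoidal shape are illustrative assumptions.

MAX_DELAY_MS = 0.6   # assumed maximum delay change
MAX_LEVEL_DB = 3.0   # assumed maximum level change

def control_values(theta_deg):
    s = math.sin(math.radians(theta_deg))
    delay_left  = -MAX_DELAY_MS * s   # earlier arrival on a rightward turn
    delay_right = +MAX_DELAY_MS * s   # later arrival
    level_left  = +MAX_LEVEL_DB * s   # higher level (ear approaches source)
    level_right = -MAX_LEVEL_DB * s   # lower level (ear recedes)
    return delay_left, delay_right, level_left, level_right
```

Note the opposite signs for the two ears, matching the opposite directions of increase and decrease of the characteristic curves.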
The delay time applied by the left-side time-difference application circuit 20 is represented by the characteristic curve Tb, shown as a dashed line in the delay-time characteristics of FIG. 9; the delay time applied by the right-side time-difference application circuit 19 is represented by the characteristic curve Ta, shown as a long-dashed line in FIG. 9.
The characteristic curves Ta and Tb are curves having completely opposite directions of increase and decrease with respect to the direction of rotation of the head of the listener M. As a result, even when headphones are used, time differences from the sound source to both ears are applied to the headphone reproduction signals, similar to those heard when listening to a sound source placed within a 180° range in the forward direction while turning the head left and right.
The level difference applied by the left-side level-difference application circuit 22 is represented by the characteristic curve La, shown as a long-dashed line in the relative level characteristics of FIG. 10; the level difference applied by the right-side level-difference application circuit 21 is represented by the characteristic curve Lb, shown as a dashed line in FIG. 10. FIG. 10 shows levels relative to the state in which the head rotation position is 0° (the forward direction).
The characteristic curves La and Lb are curves having completely opposite directions of increase and decrease with respect to the direction of rotation of the head of the listener M. That is, in the level-difference application circuit 22 the level changes of the characteristic curve La are applied, and in the level-difference application circuit 21 the level changes of the characteristic curve Lb are applied, so that sound volume changes similar to the case of listening to an actual sound source in the forward direction are applied to the headphone reproduction signals as well.
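A level-difference application circuit such as 21 or 22 can be sketched as a gain stage driven by a relative level in decibels; this is an illustrative model of the operation, not the circuit itself.

```python
# Illustrative level-difference application stage (circuits 21 and 22):
# the control value is a relative level in dB, taken here from a curve
# such as La or Lb, and applied as a linear gain.

def apply_level(samples, level_db):
    """Scale samples by a gain expressed as a relative level in dB."""
    gain = 10.0 ** (level_db / 20.0)
    return [s * gain for s in samples]
```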
The above explanation has described a method of localizing an acoustic image before the listener M; by reversing the directions of change of the characteristic selected by the direction of rotation, however, an acoustic image can also be localized behind the listener M. Further, processing can also be performed for an arbitrary number of channels for a plurality of sound sources.
Hence it is possible to localize high-quality virtual acoustic images both before and behind the listener M.
However, the rotation angle interface used in the above-described configuration requires that a band-limiting filter 3, an A/D converter 4, an offset-removal filter, and other additional circuits be added externally to the microprocessor 5; moreover, the calculated angle data must be sent to the digital signal processing circuit 6, which performs signal processing using that data.
Also, as described above, when a separate A/D converter is provided and rotation angles are detected, variations and fluctuations (temperature drift and the like) occur in the DC offsets inherent to the sensor, the amplifier, and the A/D converter; consequently, problems arise such as the inability to calculate accurate angles and displacements, and the occurrence of overflow in the interface unit due to the DC offset.
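The effect of such a DC offset can be illustrated as follows: integrating a constant offset produces an angle value that grows without limit, while a simple DC-blocking (offset-removal) stage keeps the accumulated value bounded. The filter coefficient R is an assumed value.

```python
# Illustrative DC-blocking (offset-removal) filter.  Integrating a raw
# sensor signal with a constant offset drifts linearly; after the DC
# blocker, the accumulated sum converges to a bounded value instead of
# growing without limit.  R is an assumed coefficient.

R = 0.99

def remove_dc(samples):
    """First-order DC blocker: y[n] = x[n] - x[n-1] + R * y[n-1]."""
    y_prev, x_prev, out = 0.0, 0.0, []
    for x in samples:
        y = x - x_prev + R * y_prev
        x_prev, y_prev = x, y
        out.append(y)
    return out

# A stationary sensor with a constant 0.05 offset: the raw integral
# accumulates a large false rotation, the DC-blocked integral does not.
raw = [0.05] * 1000
drift = sum(raw)               # false rotation from integrating the offset
stable = sum(remove_dc(raw))   # bounded residual
```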
Further, in the above configuration the circuit scale is large, the mounting area is considerable, and there is the additional problem of high cost. Also, sensor detection and signal processing using the detected values are performed by separate devices (the microprocessor 5 and the digital signal processing circuit 6), so that communication processing between them is necessary.