The present invention relates to an A/D converter and an A/D conversion method for converting an analog signal into a digital value.
FIG. 14 is a circuit diagram showing a configuration of a conventional A/D converter. In FIG. 14, the reference numeral 51 denotes an analog signal source for generating an analog signal V.sub.in to be converted. The reference numerals 52 and 53 denote constant voltage sources. The reference numeral 54 denotes a bank of resistors for generating reference voltages V.sub.r1 to V.sub.r7 by equally dividing the difference between the output voltages of the constant voltage sources 52 and 53. The reference numeral 55 denotes a bank of amplifier circuits, each of which amplifies the voltage difference between the voltage of the analog signal V.sub.in and the associated one of the reference voltages V.sub.r1 to V.sub.r7. The reference numeral 56 denotes a bank of latch circuits, each of which amplifies the output voltage of the associated amplifier circuit into a digital value and then holds the digital value. The reference numeral 57 denotes an arithmetic circuit for encoding the output signals of the bank 56 of latch circuits into A/D converted values. The reference numeral 58 denotes a clock generator circuit for operating the bank 56 of latch circuits and the arithmetic circuit 57. The reference numeral 59 denotes an input terminal through which a clock serving as a reference for the output clock of the clock generator circuit 58 is input. The reference numeral 60 denotes an output terminal through which the A/D converted values calculated by the arithmetic circuit 57 are output.
For example, it is assumed that the voltage of the analog signal V.sub.in is located between the reference voltages V.sub.r3 and V.sub.r4. In this case, in the first to the third amplifier circuits of the bank 55 of amplifier circuits, since the non-inverting input voltage (i.e., the voltage of the analog signal V.sub.in) is lower than the inverting input voltages (i.e., the reference voltages V.sub.r1 to V.sub.r3), the first to the third amplifier circuits output negative voltages. On the other hand, in the fourth to the seventh amplifier circuits of the bank 55 of amplifier circuits, since the non-inverting input voltage (i.e., the voltage of the analog signal V.sub.in) is higher than the inverting input voltages (i.e., the reference voltages V.sub.r4 to V.sub.r7), the fourth to the seventh amplifier circuits output positive voltages. In this way, the point in the bank 55 of amplifier circuits at which the polarity of the output voltage of an amplifier circuit switches from positive to negative, or from negative to positive, varies depending upon the voltage of the analog signal V.sub.in. Thus, the analog signal V.sub.in can be A/D converted based on this switching point.
The bank 56 of latch circuits amplifies the output voltages of the bank 55 of amplifier circuits into logical voltages (V.sub.DD : 1, V.sub.SS : 0) and holds the logical voltages. The arithmetic circuit 57 converts the values held in the bank 56 of latch circuits into three-bit A/D converted values such as those shown in FIG. 14. More specifically, a voltage lower than the reference voltage V.sub.r7 is converted into "000", a voltage higher than the reference voltage V.sub.r1 is converted into "111", and voltages intermediate between the reference voltages V.sub.r1 to V.sub.r7 are converted into "001" to "110". In this example, the value held in the bank 56 of latch circuits becomes "0001111" (herein, it is assumed that a latch circuit holds "0" when the output voltage of the associated amplifier circuit is negative and holds "1" when that output voltage is positive). The arithmetic circuit 57 therefore converts the analog signal V.sub.in into "100", and the data "100" is output through the output terminal 60.
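The conversion described above can be sketched as a short simulation. This is a minimal illustration, not the patented circuit itself: the function name, the 2 V full-scale range, and the equal-step reference ladder are assumptions chosen to match the three-bit example, with V.sub.r1 taken as the highest reference and V.sub.r7 as the lowest, as in FIG. 14.

```python
def flash_adc(v_in, v_high=2.0, v_low=0.0, bits=3):
    """Model the flash conversion of banks 55-57 for a 3-bit code."""
    n_refs = 2 ** bits - 1  # seven references/comparators for three bits
    step = (v_high - v_low) / (n_refs + 1)
    # References descend in equal steps; refs[0] models V_r1 (highest).
    refs = [v_high - step * (i + 1) for i in range(n_refs)]
    # Banks 55/56: each latch holds 1 when v_in exceeds its reference,
    # 0 otherwise, yielding a thermometer code such as "0001111".
    thermometer = [1 if v_in > r else 0 for r in refs]
    # Arithmetic circuit 57: the count of 1s is the binary output code.
    return format(sum(thermometer), f"0{bits}b")
```

With these assumed values, an input between the modeled V.sub.r3 (1.25 V) and V.sub.r4 (1.0 V), such as 1.1 V, produces the thermometer code "0001111" and the output "100", matching the example in the text.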
However, such a conventional A/D converter has the following problems.
In a conventional A/D converter such as that shown in FIG. 14, the positive/negative polarities of the output voltages of the respective differential amplifier circuits for amplifying the voltage differences between the voltage of the analog signal V.sub.in and the reference voltages V.sub.r1 to V.sub.r7 are used as the information about an A/D conversion. In other words, the A/D conversion is performed based on the level relationship between the voltage of the analog signal V.sub.in and the reference voltages V.sub.r1 to V.sub.r7.
In such an A/D converter, the conversion precision is determined by a difference between two adjacent reference voltages, i.e., by the width of a scale used in dividing a voltage difference between the output voltages of the constant voltage sources 52 and 53. For example, in order to realize an eight-bit A/D converter, the voltage difference between the output voltages of the constant voltage sources 52 and 53 is required to be divided into 256 (=2.sup.8) scales. Assuming that the voltage difference between the output voltages of the constant voltage sources 52 and 53 is 2 V, a voltage per scale becomes about 8 mV.
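The per-scale voltage quoted above follows from a one-line calculation; the 2 V range and eight-bit resolution are the figures given in the text:

```python
# LSB (per-scale) voltage of an 8-bit converter over a 2 V range.
v_range = 2.0                 # difference between constant voltage sources 52 and 53, in volts
scales = 2 ** 8               # 256 scales for eight bits
v_per_scale = v_range / scales
print(v_per_scale * 1000)     # 7.8125 mV, i.e., about 8 mV per scale
```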
Thus, in order to improve the conversion precision, a voltage per scale is required to be even smaller.
On the other hand, in the above description of the prior art A/D converter, the differential amplifier circuits were assumed to be ideal circuits. However, a real differential amplifier circuit has an offset voltage. Thus, if the voltage per scale is reduced, the influence of the offset voltage increases correspondingly, thereby preventing the conversion precision from being improved.
Assuming that the offset voltage of a differential amplifier circuit is denoted by V.sub.OS, the effective reference voltage becomes the sum of the nominal reference voltage (V.sub.r3, for example) and the offset voltage V.sub.OS. In such a case, although the positive/negative polarity of the output voltage should theoretically switch at the point where the voltage of the analog signal V.sub.in becomes equal to the reference voltage V.sub.r3, the polarity actually switches at the point where the voltage of the analog signal V.sub.in becomes equal to the voltage (V.sub.r3 +V.sub.OS).
In an eight-bit A/D converter, the allowable error per scale is .+-.4 mV, i.e., half of the 8 mV scale. Thus, the actual voltage per scale must be from 4 mV to 12 mV (i.e., 8.+-.4 mV). That is to say, in order to prevent the above-described problem, the offset voltage V.sub.OS must be within .+-.4 mV.
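The tolerance argument above can be checked numerically. This sketch assumes, as the text states, an 8 mV scale whose switching point may shift by the offset V.sub.OS and an error budget of half a scale; the function name is illustrative.

```python
def offset_within_budget(v_offset_mv, v_scale_mv=8.0):
    """Return True if the comparator offset keeps the switching point
    within the +/- half-scale (here +/-4 mV) error budget."""
    # The actual switching point is V_r + V_OS, so the code transition
    # stays in the correct scale only if |V_OS| <= half the scale width.
    return abs(v_offset_mv) <= v_scale_mv / 2

print(offset_within_budget(4.0))   # True: offset exactly at the +/-4 mV limit
print(offset_within_budget(10.0))  # False: a 10 mV MOS offset exceeds the budget
```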
However, the offset voltage V.sub.OS of a real amplifier circuit is approximately .+-.10 mV or more in the case of MOS transistors. Consequently, when MOS transistors are used, the prior art cannot realize an A/D converter having a precision of 8 bits or more.