1. Field of the Invention
The present invention relates to pipeline analog-to-digital (A/D) converters, and in particular, to digital calibration techniques for such converters.
2. Description of the Related Art
Referring to FIG. 1, a typical stage 10 of a conventional pipeline A/D converter includes a sample and hold circuit 12, an A/D conversion circuit 14, a digital-to-analog (D/A) conversion circuit 16, a signal combining circuit 18 and an output buffer amplifier 20, all interconnected substantially as shown. A pipeline A/D converter is formed by connecting N such stages 10 in series. A clock signal 9 synchronizes the operations of the sample and hold circuit 12, the A/D converter 14 and the D/A converter 16.
The analog residue signal 11 from the immediate upstream stage (i+1) is sampled and held by the sample and hold circuit 12. This analog sample voltage 13 is converted to a digital signal 15 (D.sub.i) by the A/D converter 14. This digital signal 15 is provided as the binary bit information for that stage (i) and is also converted back to an analog signal by the D/A converter 16. This analog signal 17 is summed with the original analog sample voltage 13. The resulting sum voltage 19 is then buffered by the output amplifier 20 to provide the residue voltage signal 21 for that stage 10. (A typical voltage gain for the output amplifier 20 is two.)
Referring to FIG. 2A, the A/D converter 14 can be implemented with a voltage comparator 14a which compares the sample voltage 13 to a reference voltage 13r. Referring to FIG. 2B, the D/A converter 16 can be implemented with a latching multiplexer 16a which, in accordance with the digital signal 15, selects between negative 15n and positive 15p analog voltages of equal magnitude; the selected voltage is output as the analog voltage 17 to be summed with the sample voltage 13.
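The stage operation described above can be sketched as a behavioral model. The following Python fragment is illustrative only; the function names, the radix-2 arithmetic, the reference level, and the ideal reconstruction are assumptions for the sketch, not part of the disclosed circuit:

```python
def pipeline_stage(v_in, v_ref=1.0, gain=2.0):
    """Behavioral sketch of one 1-bit stage: comparator decision,
    equal-magnitude D/A level selection, summation, then a gain of two."""
    bit = 1 if v_in >= 0.0 else 0               # comparator 14a vs. reference 13r
    v_dac = -v_ref / 2 if bit else +v_ref / 2   # mux 16a: equal-magnitude levels
    residue = gain * (v_in + v_dac)             # combining circuit 18, amplifier 20
    return bit, residue

def pipeline_adc(v_in, n_stages=12, v_ref=1.0):
    """Cascade N such stages; each stage digitizes the upstream residue."""
    bits, residue = [], v_in
    for _ in range(n_stages):
        bit, residue = pipeline_stage(residue, v_ref)
        bits.append(bit)
    return bits

def decode(bits, v_ref=1.0):
    """Ideal reconstruction of the input voltage from the per-stage bits."""
    return sum((2 * b - 1) * v_ref / 2 ** (i + 1) for i, b in enumerate(bits))
```

With ideal components the reconstruction error after N stages is bounded by v_ref / 2^N, which is why the imperfections discussed next matter: any stage error propagates into every downstream bit.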
Such analog processing stages for pipeline A/D converters (often referred to as algorithmic A/D converters) have inherent imperfections which degrade the linearity and, therefore, the accuracy of such converters. For example, the number of bits of linearity is determined by the native matching of the passive components and is usually limited to eight to ten bits. While various forms of trimming of the analog circuitry have been proven to work up to 16 bits of accuracy, such trimming procedures are very difficult and expensive.
Accordingly, a commonly used alternative approach is that of digital calibration and correction. In such an approach, the errors attributed to the analog components are identified and compensated within the digital domain. Examples of the techniques used in this approach can be found in the following references: S-H. Lee and B-S. Song, "Digital-Domain Calibration of Multistep Analog-to-Digital Converters," IEEE Journal of Solid-State Circuits, Vol. 27, No. 12, December 1992, pp. 1679-88; and A. N. Karanicolas, H-S. Lee and K. L. Bacrania, "A 15-b 1-Msample/s Digitally Self-Calibrated Pipeline ADC," IEEE Journal of Solid-State Circuits, Vol. 28, No. 12, December 1993, pp. 1207-15; the disclosures of which are incorporated herein by reference.
A major concern about such digital calibration and correction algorithms is that of the monotonicity of the transfer function. It has been shown that any linear correction algorithm with multiple internal representations of the same output code cannot guarantee the monotonicity of a converter transfer function. It has also been shown that local discontinuities in the monotonicity can occur within the output code even after digital correction.
Whereas the presence of a few signal states where the monotonicity has a local discontinuity may be acceptable in some applications, many closed loop control systems require virtually absolute monotonicity in order to avoid limit cycles. Further, local discontinuities in the monotonicity tend to be very difficult to detect by testing, can propagate toward the most significant bit and cannot be avoided by truncation since their positions are not known in advance.
As is well known, a common conventional digital calibration and correction technique involves the digital calibration of all of the transitions within the residue signal transfer characteristics. An analog input signal equal to the transition voltage is applied at the input of each stage and a decision is forced on both sides of such transition voltage. The resulting analog residue voltage is then measured by the remaining stages within the pipeline (assumed to be already calibrated or ideal). Based upon such measurements, correction coefficients for each transition are then stored in memory. Accordingly, each calibration step is equivalent to performing a digital shift of all of the residue segments along a common line (the ideal linear transfer characteristic).
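The per-transition calibration step described above can be sketched as follows. This Python fragment is a simplified illustration; the Stage class, the ideal backend measurement, and the numeric values are assumptions made for the sketch:

```python
class Stage:
    """Simplified 1-bit stage whose decision can be forced during calibration."""
    def __init__(self, v_ref=1.0, gain=2.0):
        self.v_ref, self.gain = v_ref, gain

    def residue(self, v_in, forced_bit):
        v_dac = -self.v_ref / 2 if forced_bit else +self.v_ref / 2
        return self.gain * (v_in + v_dac)

def calibrate_transition(stage, backend_measure, v_transition=0.0):
    """Apply the transition voltage, force the decision both ways, and let
    the backend (assumed already calibrated or ideal) measure both residues.
    The difference is the correction coefficient stored for this transition."""
    r_low = backend_measure(stage.residue(v_transition, forced_bit=0))
    r_high = backend_measure(stage.residue(v_transition, forced_bit=1))
    return r_low - r_high
```

With an ideal stage the measured coefficient equals 2·v_ref; an inter-stage gain error shows up directly as a deviation from that value, which is what the stored coefficient compensates.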
Initially, such a calibration algorithm may appear to preserve overall monotonicity, since each step in the calibration procedure is monotonic. Indeed, this may be true if the analog calibration voltages lie precisely at the actual voltage transition points. However, due to the different offsets of the individual voltage comparator circuits, the actual transition points are offset, or shifted, from their ideal positions. These offsets in the transition points, when combined with the inherent gain errors, lead to a calibrated correction which is not precisely equal to the actual transition gap. Therefore, the overall transfer characteristic has discontinuities in its monotonicity.
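A small numeric sketch shows the consequence: when the stored correction does not match the actual gap at the (offset) transition, the corrected transfer characteristic steps backward. The comparator offset, gain, and correction figures below are hypothetical values chosen purely for illustration:

```python
def corrected_output(v, correction, v_ref=1.0, gain=1.95, comp_offset=0.02):
    """Digitally corrected transfer of a 1-bit stage whose comparator trips
    at comp_offset instead of the ideal transition voltage (zero)."""
    bit = 1 if v >= comp_offset else 0
    v_dac = -v_ref / 2 if bit else +v_ref / 2
    residue = gain * (v + v_dac)
    # The backend is taken as ideal: the stored coefficient re-aligns the
    # upper residue segment against the lower one in the digital domain.
    return residue + bit * correction

# The actual gap at the shifted transition is gain * v_ref = 1.95, but the
# calibration is assumed to have stored 1.90; the corrected transfer then
# steps DOWN across the transition -- a local monotonicity discontinuity.
below = corrected_output(0.019, correction=1.90)
above = corrected_output(0.021, correction=1.90)
```

Here `above < below` even though the input increased; substituting the exact gap (1.95) as the correction restores a monotonic transfer across this transition, illustrating why a coefficient that misses the true gap cannot.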
Accordingly, it would be desirable to have a digital calibration algorithm which truly preserves overall monotonicity of its transfer characteristic.