1. Field of the Invention
The present invention relates generally to digital-to-analog converters, and more particularly to a method and apparatus for correcting nonlinearity in the digital-to-analog conversion process and/or in signal conditioning following the digital-to-analog conversion process. Specifically, the present invention relates to a method for correcting nonlinearity that does not require a linearity reference and is not limited to a particular kind of converter technology.
2. Description of the Background Art
High resolution digital-to-analog conversion (DAC) technology has become one of the key analog circuit technologies for digital audio and telecommunications applications. Precision of at least sixteen bits is readily achieved in monolithic form using oversampling converters, such as the delta-sigma digital-to-analog converter described in Sooch et al., U.S. Pat. No. 5,087,914. In such a delta-sigma converter, an interpolation filter receives a digital input signal at the input sampling rate F_s, increases the sampling rate, and then filters all images and quantization noise at F_s/2 and above. The output of the interpolation filter is received by a digital delta-sigma modulator that converts the output of the interpolation filter to a one-bit data stream. This one-bit data stream controls a one-bit DAC having only two analog levels. The analog signal from the DAC is received by an analog low-pass filter that provides the analog output of the delta-sigma digital-to-analog converter. Because the analog output is derived from the two analog levels of the one-bit DAC, the digital-to-analog conversion process is monotonic and highly linear.
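The inherent linearity of the one-bit stream can be illustrated with the following Python sketch. It models only a first-order delta-sigma modulator (practical audio converters use higher-order loops, and the function name is hypothetical), showing that the time average of the two-level output stream tracks the oversampled input:

```python
def delta_sigma_modulate(samples):
    """First-order delta-sigma modulator (illustrative sketch).

    Each oversampled input value in [-1.0, +1.0] is quantized to one of
    only two levels, +1 or -1; the quantization error is fed back through
    an integrator, so that after low-pass filtering the average of the
    one-bit stream tracks the input.
    """
    integ = 0.0                              # integrator state
    bits = []
    for x in samples:
        y = 1.0 if integ >= 0.0 else -1.0    # one-bit quantizer: two analog levels
        bits.append(y)
        integ += x - y                       # feed back the quantization error
    return bits

# A constant (DC) input of 0.25: the running average of the one-bit stream,
# which plays the role of the analog low-pass filter, converges to 0.25.
stream = delta_sigma_modulate([0.25] * 1000)
average = sum(stream) / len(stream)
```

Because the output is constructed from only two analog levels, any mismatch between those levels produces gain and offset errors but no code-dependent nonlinearity, which is the property the paragraph above relies on.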
Delta-sigma digital-to-analog converters are capable of providing precision greatly in excess of sixteen bits. At these higher resolutions, however, nonlinearity becomes significant with respect to the least significant bit of resolution. For instrumentation applications, the nonlinearity represents an error or deviation from an accurate value. In audio or signal processing applications, the nonlinearity causes harmonic and intermodulation distortion to appear in the converted signal. Such harmonic and intermodulation distortion may mask desired components of the converted signal.
Most known techniques for achieving high linearity in digital-to-analog conversion require some kind of highly linear calibration reference. A ramp function generator has been used as a linearity reference for calibrating a D/A converter. As described in Maio et al., "An Untrimmed D/A Converter with 14-Bit Resolution," an R/2R ladder DAC has a duplicate set of switched current sinks providing a sub-DAC responsive to compensation data. The compensation data are read from a RAM that is programmed during a calibration cycle. In the calibration cycle, a counter circuit provides the digital input to the DAC, and the output of the DAC is compared to a highly linear ramp function generated by a Miller integrator using a styrol capacitor. The counter circuit is responsive to the comparison in order to program the RAM with the compensation data.
A low distortion sine wave oscillator has been used as a linearity reference for calibrating a DAC in a subranging analog-to-digital converter (ADC). As described in Evans, U.S. Pat. No. 4,612,533, digital calibration values for reducing harmonic distortion are computed by a microprocessor that performs a fast Fourier transform on the digitized values obtained when the output of the low distortion sine wave oscillator is applied to the input of the analog-to-digital converter.
An analog-to-digital converter has been used as a linearity reference for calibrating a digital-to-analog converter. As described in Cataltepe et al., "Digitally Corrected Multi-Bit ΣΔ Data Converters," 1989 IEEE International Symposium on Circuits and Systems, Portland, Oregon, May 8-11, 1989, the circuit blocks of a multi-bit delta-sigma ADC are rearranged during calibration to form a single-bit delta-sigma ADC that is used to calibrate a multi-bit internal subconverter DAC. During calibration, a digital ramp generated by an N-bit counter is fed to the input of the multi-bit internal subconverter DAC, and the output of the subconverter DAC is digitized by the single-bit delta-sigma ADC. The single-bit delta-sigma ADC thereby provides non-linearity data that are stored in a memory.
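The calibration pass just described can be pictured with the following Python sketch. The single-bit delta-sigma ADC is abstracted as a list of measured analog levels, one per counter code, and the function name is hypothetical; the point is only how each code's deviation from an ideal straight line is computed and stored:

```python
def build_correction_table(measured_levels):
    """Step a counter through every DAC code, compare each measured level
    against the ideal straight line through the end points, and record the
    deviation (the nonlinearity datum) for that code."""
    n = len(measured_levels)
    ideal_step = (measured_levels[-1] - measured_levels[0]) / (n - 1)
    table = []
    for code in range(n):
        ideal = measured_levels[0] + code * ideal_step
        table.append(measured_levels[code] - ideal)   # stored in the memory
    return table

# A hypothetical 2-bit subconverter whose code-1 level is 0.1 LSB too high:
errors = build_correction_table([0.0, 1.1, 2.0, 3.0])
```

In the actual circuit each "measured level" is itself obtained by decimating the single-bit ADC's output stream, which is why the precision of that ADC bounds the quality of the stored data, as noted below.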
The above methods employing a linearity reference suffer from the problem that the precision of the calibrated DAC is limited by the precision of the linearity reference, and a good linearity reference is relatively difficult to obtain.
A self-calibration technique has been devised for calibrating a switched-capacitor DAC that is internal to a successive-approximation ADC. As described in Welland et al., U.S. Pat. No. 4,709,225, it is desirable in such a DAC for the capacitors to form a binarily weighted sequence of values. The self-calibration technique includes sequentially connecting trim capacitors in parallel with a primary capacitor and determining, as each trim capacitor is connected, whether the resultant parallel capacitance is larger or smaller than that of a reference capacitor. If the resultant capacitance is too large, the trim capacitor is disconnected; otherwise, it is left connected. The process is repeated until each trim capacitor has been tried. The final resultant capacitance becomes the trimmed value of the smallest capacitance in the sequence. For adjusting the next-largest capacitance value in the sequence, the final resultant capacitance is connected in parallel with the reference capacitor to form a new reference capacitor, and the process is repeated to trim the next-largest capacitance value, and so on, until all of the capacitance values have been trimmed.
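The trimming procedure above is a successive-approximation search, which the following Python sketch makes concrete. Capacitances are expressed in integer unit-capacitor counts so the comparisons are exact; the function names and the numeric values are hypothetical illustrations, not taken from the Welland et al. patent:

```python
def trim_to_reference(primary, trim_caps, reference):
    """Try each binarily weighted trim capacitor from largest to smallest;
    if connecting it makes the total exceed the reference, disconnect it,
    otherwise leave it connected (successive approximation)."""
    total = primary
    for trim in sorted(trim_caps, reverse=True):
        if total + trim > reference:
            continue          # comparator says "too large": disconnect
        total += trim         # otherwise the trim capacitor stays connected
    return total

def calibrate_sequence(primaries, trims, c_ref):
    """Trim each capacitor in the binarily weighted sequence. The trimmed
    result, placed in parallel with the old reference, becomes the new
    reference for the next-largest capacitor."""
    reference = c_ref
    trimmed = []
    for primary, trim_caps in zip(primaries, trims):
        total = trim_to_reference(primary, trim_caps, reference)
        trimmed.append(total)
        reference = total + reference   # parallel capacitances add
    return trimmed

# Two-stage example: capacitors fabricated below their nominal values of
# 1000 and 2000 units, each trimmed against the growing reference.
result = calibrate_sequence([900, 1900],
                            [[80, 40, 20, 10], [160, 80, 40, 20]],
                            1000)
```

The sketch shows why the technique needs no external linearity reference: each stage is measured only against capacitors already on the chip, which is also why, as noted below, the method is tied to this particular switched-capacitor converter technology.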
Known self-calibration methods are limited to a particular kind of converter technology. Accordingly, there is a need for a method of correcting nonlinearity in a digital-to-analog converter that does not require a linearity reference and is not limited to a particular kind of converter technology.