The advent of the digital age has established, and continues to create, advancements over analog design in technological categories such as computing, communications, and electronic entertainment. Access to these technologies, therefore, is becoming increasingly affordable and realizable through digital innovation.
The digital age, however, has not obviated the need for analog circuitry. Consequently, both Analog-to-Digital Conversion (ADC) and Digital-to-Analog Conversion (DAC) technologies are very much in demand to bridge the gap between the analog and digital domains.
DAC technologies are needed, for example, when digital information must control an analog component. Accordingly, control loops often incorporate digital computation circuitry that compares a reference signal with a generated signal to calculate a digital error between the two signals. The digital error signal is often then applied to an analog correction component, such as a Voltage Controlled Oscillator (VCO) or a Current Controlled Attenuator (CCA), to correct the error. As such, a DAC is required to convert the digital error signal into an analog form suitable for use by the analog correction component.
Generally speaking, digital to analog conversion is accomplished through the scaling, e.g., division or multiplication, of a reference signal, e.g., voltage, current or charge, into quantized signal segments. Each segment may then be combined in response to an applied input code to form the analog output signal. For an ideal DAC, sequencing the input code from an all logic zero value to an all logic one value renders a rising (or falling) analog staircase waveform having equal magnitude steps, i.e., a monotonic waveform.
Once the monotonic waveform is smoothed, it forms a perfectly straight line having a constant slope at every point along the line. Each step of the staircase waveform represents a Least Significant Bit (LSB) having a magnitude equal to: LSB = FSR/(2^M − 1), where FSR is the Full Scale Range of the DAC output signal and M is the resolution of the DAC in bits.
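The ideal staircase and LSB relationship above can be illustrated with a short sketch. The FSR and resolution values below are hypothetical, chosen only for illustration:

```python
# Sketch: ideal M-bit DAC transfer characteristic (hypothetical values).
FSR = 1.0   # Full Scale Range, e.g. in volts (assumed for illustration)
M = 4       # DAC resolution in bits (assumed)

LSB = FSR / (2**M - 1)  # magnitude of one staircase step

# Sequencing the input code from all-zeros to all-ones renders a
# monotonic staircase with equal-magnitude steps.
staircase = [code * LSB for code in range(2**M)]

# The top code reaches the full scale range exactly in the ideal case.
assert abs(staircase[-1] - FSR) < 1e-12
```

Smoothing this sequence (i.e., connecting its points) yields the straight line of constant slope described above.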
For a non-ideal DAC, however, Differential Non-Linearities (DNL) and Integral Non-Linearities (INL) perturb the staircase waveform and thus adversely affect the linearity of the DAC. DNL, for example, affects the magnitude of each step, while INL affects the straightness of the staircase waveform when smoothed. Both parameters, therefore, contribute to the inaccuracy of the static code conversion and influence the quality of the dynamic analog output.
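The DNL and INL perturbations described above can be quantified with the commonly used endpoint-referenced definitions (an assumption here; other references, such as a best-fit line, also exist). The measured output levels below are hypothetical:

```python
# Sketch: DNL/INL of a non-ideal DAC, using endpoint-referenced definitions
# (assumed convention; not specific to any particular DAC design).
def dnl_inl(measured):
    n = len(measured)
    lsb = (measured[-1] - measured[0]) / (n - 1)  # average step size
    # DNL[i]: deviation of each actual step from one LSB, in LSBs.
    dnl = [(measured[i + 1] - measured[i]) / lsb - 1.0 for i in range(n - 1)]
    # INL[i]: deviation of each level from the ideal straight line, in LSBs.
    inl = [(measured[i] - (measured[0] + i * lsb)) / lsb for i in range(n)]
    return dnl, inl

# Hypothetical measured output levels of a 2-bit DAC (volts):
dnl, inl = dnl_inl([0.00, 0.30, 0.68, 0.99])
```

Note that each INL value accumulates the preceding DNL values, which reflects how step-size errors bend the smoothed staircase away from a straight line.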
While design constraints for the DNL specification may be architecturally relaxed by employing thermometer or segmented structures, the INL specification is strongly coupled to the static errors of the analog components that generate the output signal. In order to counteract the static errors, two conventional approaches have been employed. First, an intrinsic DAC design approach is used, which employs large analog devices to reduce the static error to acceptable levels. Alternatively, a self-calibrating design approach is used, which employs additional error acquisition circuitry, calibration logic and operations, and error correction circuitry to improve the linearity.
In the self-calibrating design approach, a self-calibration technique is applied to each individual analog element that is used to produce the output signal, through the use of individual calibrating DACs (CALDACs), or biasing capacitors. The calibration scheme uses components that sense a difference between a reference and a calibrated element, such as through the use of a single-bit ADC, i.e., a comparator, or a multiple-bit ADC. However, the sensing components may cause problems due to their substantially unavoidable input offsets.
Conventional input offset cancellation techniques are then employed, whereby the signal being calibrated and the reference signal are applied to the inputs of an ADC during a first measurement. The inputs are then swapped, a second measurement is taken, and a mean value is calculated from the first and second measurements. Such a cancellation approach, however, places stringent accuracy requirements on both the measurement components and the calibrating elements.
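The swap-and-average cancellation described above can be modeled in a few lines. The measurement model, offset, and current values below are all hypothetical, and the averaging of the first measurement with the negated second is one common interpretation of the mean-value step:

```python
# Sketch: conventional input offset cancellation by swapping ADC inputs.
# The ADC measurement is modeled as (plus - minus) + offset; the offset,
# cell current, and reference current are assumed values.
def measure(plus, minus, offset):
    return (plus - minus) + offset

I_cell, I_ref, offset = 1.013, 1.000, 0.005  # hypothetical values

m1 = measure(I_cell, I_ref, offset)   # first measurement
m2 = measure(I_ref, I_cell, offset)   # second measurement, inputs swapped
# Averaging m1 with the negated m2 cancels the constant input offset:
error = (m1 - m2) / 2.0               # recovers I_cell - I_ref
```

The residual, as the passage notes, is that the accuracy of this cancellation still rests on the measurement components and calibrating elements themselves.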
Still other calibration techniques involve the calibration of only one type of current cell, e.g., the thermometer current cells of a thermometer DAC architecture, or segmented DAC architecture, such as the calibration method disclosed in Radulov et al., U.S. Pat. No. 7,076,384, issued Jul. 11, 2006, which is incorporated herein by reference in its entirety. The calibration method disclosed in Radulov institutes a two-part calibration, whereby a temporary signal is first calibrated to a reference signal and the main signal to be calibrated is then calibrated to the temporary signal. In addition, the sign of the quantization error is controlled during the two-part calibration, so as to minimize quantization error effects on the calibration. Such a calibration method, however, exploits the fact that all thermometer current cells to be calibrated are nominally equivalent to each other and thus, does not facilitate calibration of the non-equivalent, binary current cells that exist in a binary, or segmented architecture.
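The two-part calibration described above (temporary element trimmed to the reference, then the main element trimmed to the temporary element) might be sketched as follows. This is only an illustrative model, not the patented implementation: the comparator-driven trim loop, step size, and current values are all assumptions.

```python
# Sketch of a two-part calibration: a temporary cell is first trimmed toward
# the reference, then the main cell is trimmed toward the temporary cell.
# The single-bit (comparator-style) trim model and step size are hypothetical.
def trim_toward(target, value, step, iterations=256):
    # A comparator decision drives the trim up or down one step at a time,
    # mimicking a CALDAC adjustment; the result settles within one step.
    for _ in range(iterations):
        value += step if value < target else -step
    return value

I_ref, I_temp, I_main = 1.000, 0.930, 1.080  # assumed currents
step = 0.001                                  # assumed trim quantization

I_temp = trim_toward(I_ref, I_temp, step)   # part 1: temporary -> reference
I_main = trim_toward(I_temp, I_main, step)  # part 2: main -> temporary
```

Because both parts settle to within one trim step, the main cell ends up within roughly two steps of the reference; controlling the sign of each quantization error, as Radulov does, keeps these residuals from accumulating.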
Efforts continue, therefore, to provide calibration techniques for all current cells of a current steering architecture, whether the current steering architecture employs thermometer, binary, or segmented current cell combinations. Such a calibration technique would be substantially free of any input offset error caused by the measurement ADC, since all DAC current cells would be calibrated. In addition, it is believed that the advantages of calibrating all DAC current cells also include: improved portability of the current-based circuit, e.g., a DAC design, into other silicon technologies; improved overall accuracy and reduced sensitivity to manufacturing tolerances; improved chip yield, etc.