All digital systems that interact with the real world must convert continuous analog signals into a discrete representation and/or convert those discrete representations back into continuous signals. Devices that bridge the gap between the analog and digital worlds are known as data converters. Not surprisingly, digital-to-analog converters (DACs) and analog-to-digital converters (ADCs) are employed in a wide variety of applications including telecommunications, medical imaging, consumer electronics, and general-purpose measurement. Systems comprising DAC and ADC components can be characterized by their sampling rate, which measures how frequently the system converts an analog voltage to a digital sample or a digital sample to an analog voltage. The capacity or bandwidth of systems analyzing incoming analog waveforms is limited by the sampling rates of their component ADCs.
A current approach to increasing the overall sampling rate of a data conversion system is to interleave multiple ADCs or DACs. Such systems interleave M individual sampling slices, each with sample rate fs, to yield a converter with an aggregate sample rate of M multiplied by fs. This technique is used to increase the bandwidth both of monolithic data converters that interleave more than one sampling slice and of data conversion systems that interleave more than one data conversion chip. Interleaving is employed in several ADCs today, such as Agilent Technologies, Inc.'s 4 GSa/s and 20 GSa/s data converters. As of 2003, Agilent has designed ADCs incorporating up to 80 separate sampling slices running at an aggregate rate of 20 gigasamples per second (GSa/s).
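The aggregate-rate relationship described above can be illustrated with a short sketch (the helper name and sample values below are hypothetical, not taken from any patent): M slices, each sampling at rate fs in round-robin fashion, together produce one sample record at rate M multiplied by fs.

```python
# Hypothetical sketch: merging M per-slice records, each captured at rate fs,
# into one record at the aggregate rate M * fs, round-robin by sample index.

def interleave_samples(slice_records):
    """Merge M per-slice sample records into a single interleaved record.

    Slice m contributes the samples at positions m, m + M, m + 2M, ... of
    the aggregate record, so the merged record runs at M times the slice rate.
    """
    M = len(slice_records)
    N = len(slice_records[0])
    merged = []
    for n in range(N):          # n-th sample of each slice
        for m in range(M):      # slices fire in round-robin order
            merged.append(slice_records[m][n])
    return merged

# Example: 4 slices, each holding 3 consecutive samples of its own phase.
slices = [[0, 4, 8], [1, 5, 9], [2, 6, 10], [3, 7, 11]]
print(interleave_samples(slices))  # [0, 1, 2, ..., 11]
```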
While interleaving components such as ADCs or DACs is a powerful technique for increasing the maximum sampling rate of a signal processing system, the performance of interleaved converters is limited by offset and gain mismatches as well as by timing errors between interleaved slices. The calibration of both voltage and sample clock timing is critical to increasing the maximum sample rate without significantly degrading accuracy. In general, it is sufficient when interleaving to align the fastest-slewing signal to within ½ of a least significant bit (LSB). For oscilloscope applications, timing errors must be less than 0.4% of the period of the fastest input signal. One method for calibrating highly interleaved converter systems applies an external signal, captures that signal in memory, and then processes that signal to determine the relative time offsets of the sampling slices. There are two general correction approaches once the timing offset errors are known. One approach builds on-chip time delay circuits that fine-tune sample clocks to remove measured time offsets. The other approach digitally corrects for sampling time errors by interpolating the captured samples to yield an estimate of the sampled data. Conventional implementations of the time offset measurement used by either method require significant amounts of on-board or on-chip high-speed memory. Significant memory requirements impact the physical size and cost of manufacturing a data conversion system.
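The digital-correction approach mentioned above can be sketched as follows (a simplified illustration, not any patent's implementation): once a slice's timing offset is known, its samples are re-estimated by interpolating the captured record toward the ideal sampling instants. Linear interpolation is shown for brevity; practical systems typically use higher-order (e.g., sinc or polyphase) interpolators.

```python
# Hypothetical sketch of digital timing correction by interpolation.
# `offset_frac` is the slice's measured timing error as a fraction of the
# aggregate sample period: positive means the slice sampled late.

def correct_slice_timing(record, slice_index, M, offset_frac):
    """Re-estimate the samples taken by one slice of an M-way interleaved
    record by linearly interpolating toward the neighboring samples."""
    out = list(record)
    for n in range(slice_index, len(record), M):
        if offset_frac > 0 and n >= 1:
            # Slice sampled late: blend toward the previous (earlier) sample.
            out[n] = (1 - offset_frac) * record[n] + offset_frac * record[n - 1]
        elif offset_frac < 0 and n + 1 < len(record):
            # Slice sampled early: blend toward the next (later) sample.
            out[n] = (1 + offset_frac) * record[n] + (-offset_frac) * record[n + 1]
    return out

# Example: a unit ramp captured by a 2-way interleaved converter whose
# second slice samples half a period late; its samples shift back by 0.5.
ramp = list(range(8))
corrected = correct_slice_timing(ramp, slice_index=1, M=2, offset_frac=0.5)
print(corrected)  # odd-index samples become 0.5, 2.5, 4.5, 6.5
```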
During a typical foreground calibration, a test signal is switched in to apply a dedicated calibration signal to the converter system. The entire response over some time interval is captured in memory present on or external to the data converter. The response to the calibration signal source is then transformed into the frequency domain to compute per-slice time offsets. Prior calibration signals include periodic signals (sinusoids, square waves) and non-periodic signals (ramps). One conventional approach to time offset measurement follows:

1. Apply a sinusoid of frequency F, where the sinusoid is not phase locked to the converter clock;
2. Capture N multiplied by M consecutive samples of the ADC response, where M is the number of slices in the interleaved converter and N is the number of samples desired per slice;
3. De-interleave the time record into M separate records of N samples each;
4. Perform a fast Fourier transform (FFT) on each of the M records;
5. Find the phase of the FFT bin nearest to the stimulus sinusoid of frequency F;
6. Compare the phases found from each of the M slices to determine the relative phase offset of each slice; and
7. Convert phase offset into time offset using the calibration signal frequency and converter sample rate.
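The seven steps above can be sketched in Python as a simulation (this is an illustrative sketch, not any patent's implementation; the near-coherent test tone, the injected timing errors, and all helper names are assumptions made for the example). Steps 1–3 are simulated by generating pre-de-interleaved slice records of a sinusoid with known per-slice timing errors; steps 4–7 recover those errors from FFT phases.

```python
import numpy as np

def measure_slice_time_offsets(records, F, fs_slice):
    """Steps 4-7: FFT each de-interleaved record, read the phase at the bin
    nearest the stimulus frequency F, and convert each slice's deviation from
    its nominal phase skew into a time offset relative to slice 0."""
    M = len(records)
    N = len(records[0])
    k = int(round(F / fs_slice * N))                   # bin nearest F (step 5)
    phases = [np.angle(np.fft.fft(r)[k]) for r in records]
    offsets = []
    for m in range(M):
        nominal = 2 * np.pi * F * m / (M * fs_slice)   # ideal inter-slice skew
        dphi = phases[m] - phases[0] - nominal         # step 6: relative phase
        dphi = (dphi + np.pi) % (2 * np.pi) - np.pi    # wrap to [-pi, pi)
        offsets.append(dphi / (2 * np.pi * F))         # step 7: phase -> time
    return offsets

# Steps 1-3 (simulated): an M-way interleaved capture of a sinusoid,
# already de-interleaved into M records of N samples each. Slice m nominally
# samples m/M of a slice period after slice 0, plus its timing error.
M, N, fs_slice = 4, 64, 1.0
F = 5 * fs_slice / N                                   # near-coherent test tone
true_dt = [0.0, 0.02, -0.015, 0.01]                    # injected timing errors
records = [np.sin(2 * np.pi * F * (np.arange(N) + m / M + true_dt[m]))
           for m in range(M)]
est = measure_slice_time_offsets(records, F, fs_slice)
print(est)  # recovers true_dt (relative to slice 0)
```

The recovered offsets are relative to slice 0; because the phase comparison in step 6 is differential, the absolute phase of the (non-phase-locked) stimulus cancels out, which is why the procedure does not require the sinusoid to be locked to the converter clock.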
Three specific implementations of foreground calibration are described in U.S. Pat. No. 5,294,926 to Corcoran, entitled “Timing and amplitude error estimation for time-interleaved analog-to-digital converters,” U.S. Pat. No. 4,763,105 to Jenq, entitled “Interleaved digitizer array with calibrated sample timing,” and U.S. Pat. No. 6,269,317 to Schachner et al., entitled “Self-calibration of an oscilloscope using a square-wave test signal,” the disclosures of which are herein incorporated by reference. Methods incorporating background calibration, by contrast, allow the converter to operate normally while auto-calibrating on its own input signal.
In the process of calibration, long capture records are required to average out uncorrelated noise sources, thereby improving measurement accuracy. Unfortunately, long capture records demand more high-speed sample storage, such as high-speed RAM, and such storage is limited in high-speed ADCs. Off-chip methods for timing calibration that use memory external to a data converter tend to be more computationally complex and necessarily much slower. A direct implementation of these off-chip methods for on-chip calibration is inefficient and computationally intensive.
Lastly, although few data converters have on-chip timing calibration systems, some already incorporate timing adjustment circuits. A remaining hurdle in implementing on-chip timing calibration is determining those timing adjustments to be made by the timing adjustment circuits.