An Analog to Digital Converter (ADC) is an electronic circuit that converts an analog signal into a digital signal. ADCs are widely used in communication systems, where analog data is transferred over a digital channel. The typical operation of an ADC includes receiving a clock signal, sampling an analog signal at an event of the clock (either the rising or the falling edge, or both), and producing a digital representation of the sample.
Due to limitations of the electronic components of the ADC, such as capacitor charging time, the rate at which an ADC can efficiently acquire a sample is limited. This presents a problem when the analog signal to be converted is of high frequency, as is now standard in the industry.
Time Interleaved ADCs (TIADCs) present a solution to the above problem. A TIADC is a set of ADCs, hereafter referred to as sub-ADCs, which are multiplexed and interleaved in time in order to provide a sampling rate higher than that of the individual sub-ADCs. For a TIADC consisting of a set of N sub-ADCs, each with a sampling rate of Fs,sub and spaced evenly in time by 1/(N·Fs,sub), the net sample rate of the TIADC is N·Fs,sub.
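The interleaving arithmetic above can be illustrated with a minimal numerical sketch (illustrative only, and not part of the disclosed invention; the channel count N = 4 and the per-channel rate of 1 GS/s are assumed values):

```python
import numpy as np

# Assumed example parameters: 4 sub-ADCs, each sampling at 1 GS/s.
N = 4
Fs_sub = 1e9                 # per-sub-ADC sample rate (Hz)
Fs_net = N * Fs_sub          # aggregate TIADC rate: N * Fs_sub = 4 GS/s

# Ideal sampling instants of sub-ADC k: t = (n*N + k) / Fs_net,
# i.e. each sub-ADC runs at Fs_sub, offset by k/(N*Fs_sub).
n = np.arange(8)             # a few conversion cycles per sub-ADC
instants = np.array([(n * N + k) / Fs_net for k in range(N)])

# Merged in time order, the combined stream is uniformly spaced
# by 1/(N*Fs_sub), giving the net sample rate N*Fs_sub.
combined = np.sort(instants.ravel())
assert np.allclose(np.diff(combined), 1.0 / Fs_net)
```

The assertion at the end verifies that the interleaved instants are evenly spaced by 1/(N·Fs,sub), which is exactly the spacing stated above.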
Due to imperfections in analog and digital circuitry, the sub-ADCs' sampling instants cannot be spaced precisely by 1/(N·Fs,sub), causing errors in the sub-ADCs' sampling instants referred to as "interleave timing errors". These errors degrade the Signal to Noise and Distortion Ratio (SNDR) of a TIADC.
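The effect of interleave timing errors on SNDR can be sketched numerically. The following is an illustrative model only (not part of the disclosed invention); the function name, the input frequency, and the skew values are all assumed for the example, and the sub-ADCs are otherwise treated as ideal:

```python
import numpy as np

def tiadc_sndr(skews_s, f_in=0.9e9, fs_net=4e9, n_samples=4096):
    """Estimate the SNDR (dB) of an otherwise ideal TIADC whose
    interleave channels sample with the given timing skews (seconds).
    A nonzero skew means that channel samples slightly off its
    ideal instant n/fs_net."""
    N = len(skews_s)
    n = np.arange(n_samples)
    # Actual instants: uniform grid perturbed per interleave channel.
    t = n / fs_net + np.asarray(skews_s)[n % N]
    x_actual = np.sin(2 * np.pi * f_in * t)
    x_ideal = np.sin(2 * np.pi * f_in * n / fs_net)
    err = x_actual - x_ideal
    # Ratio of signal power to error (noise + distortion) power.
    return 10 * np.log10(np.sum(x_ideal**2) / np.sum(err**2))

# A 1 ps skew on one of four channels already yields a finite SNDR;
# increasing the skew degrades the SNDR further.
sndr_small = tiadc_sndr([0, 1e-12, 0, 0])
sndr_large = tiadc_sndr([0, 10e-12, 0, 0])
assert sndr_large < sndr_small
```

Under this simple model, scaling the skew by a factor of ten raises the error power by roughly a factor of one hundred, i.e. the SNDR drops by about 20 dB, which illustrates why even picosecond-level timing errors matter at gigahertz input frequencies.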
In addition, interleave timing errors may vary in a number of manners, such as: differences between sub-ADCs; differences between TIADCs; differences between dies; and differences between temperatures, processes, power supply voltages, etc., which appear over time.
Calibration of interleave timing errors, and of the systems which produce them, can be performed using a method called "foreground calibration", according to which special test signals are introduced at the data signal input and used for timing optimization. This method introduces a period of time during which the TIADC is not resolving true input data signals, which is not acceptable for some applications. In addition, foreground calibration introduces a slowdown in the final data transmission.
Another existing method for calibrating interleaved timing errors is by using special circuitry, such as Digital to Analog Converters. This solution, however, is expensive in terms of die area and power dissipation.
An alternative method is a one-time calibration (e.g., at power-up), in which the calibration scheme does not interfere with the transmission scheme once the latter has begun. This method, however, is not optimal: because the errors themselves change over time, as temperature and other operating conditions change, the calibration degrades over time, which entails SNDR degradation over time.
"Design Considerations for Interleaved ADCs", B. Razavi, IEEE Journal of Solid-State Circuits, Vol. 48, No. 8, August 2013, discloses a method for measuring the interleave timing error of a TIADC consisting of two sub-ADCs. This method is not sufficient for TIADCs of higher order, consisting of more than two sub-ADCs.
It would be advantageous to calibrate interleave timing errors in high order TIADCs without interfering with normal data transmission, while maintaining calibration throughout the entire transmission scheme, using an economical system.
It is therefore an object of the present invention to provide a method for calibrating interleave timing errors in high order TIADCs, without interfering with data transmission, while maintaining calibration throughout the entire transmission scheme, using an economical system.
Other objects and advantages of this invention will become apparent as the description proceeds.