As is known to a person skilled in the art, CT ΣΔ ADCs are frequently used in numerous domains and especially in wireless radio receivers (or transceivers) used in radio communication equipment, such as mobile phones, where selected analog radio signals need to be converted into digital signals before being demodulated.
Such converters are notably described in the document by K. Philips et al., “A 2 mW 89 dB DR Continuous-Time ΣΔ ADC with Increased Immunity to Wide-Band Interferers”, ISSCC Dig. Tech. Papers, pp. 86-87, February 2004, and in the patent document WO 01/03312.
These converters offer some meaningful advantages over discrete-time (DT) implementations, notably an implicit anti-aliasing filter, the absence of a front-end sample-and-hold (S/H) circuit (a S/H is still present, but it is located after the loop filter; the loop filter is therefore continuous-time rather than discrete-time, which builds the anti-aliasing property into the ΣΔ ADC loop), the absence of kT/C noise, and speed advantages, all of which lead to a lower power consumption.
Nevertheless, in baseline deep-submicron CMOS technology (with a low supply voltage, for instance 1.2 V in CMOS90LP), the continuous-time loop filter of a CT ΣΔ ADC is built with RC integrators which are very sensitive to process variations and temperature spread of their analog components.
In fact, the time constant, and hence the unity-gain frequency, of these RC integrators depends on their RC product and therefore on the types of resistors and capacitors used (for instance P+ poly or N+ poly resistors and fringe capacitors in CMOS technology), which are very sensitive to process variations and temperature spread. In CMOS technology, the process spread increases as the technology scales down. For instance, the worst-case spread on the RC product is approximately +/−25% in 90 nm CMOS technology, and approximately +/−40% in 65 nm CMOS technology.
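The effect of this spread can be illustrated numerically. The sketch below, assuming arbitrary illustrative component values (R = 10 kΩ, C = 1 pF, not taken from the text), shows how the quoted +/−25% and +/−40% worst-case RC-product spreads translate into the unity-gain frequency f_u = 1/(2πRC) of an ideal RC integrator.

```python
import math

def unity_gain_frequency(R, C):
    """Unity-gain frequency of an ideal RC integrator: f_u = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * R * C)

# Illustrative nominal values (assumption, not from the text): R = 10 kOhm, C = 1 pF.
R_nom, C_nom = 10e3, 1e-12
f_nom = unity_gain_frequency(R_nom, C_nom)

# Worst-case RC-product spreads quoted above: +/-25 % (90 nm), +/-40 % (65 nm).
for label, spread in (("90 nm", 0.25), ("65 nm", 0.40)):
    f_slow = f_nom / (1.0 + spread)  # RC too large -> integrator too slow
    f_fast = f_nom / (1.0 - spread)  # RC too small -> integrator too fast
    print(f"{label}: f_u ranges from {f_slow/1e6:.1f} MHz to {f_fast/1e6:.1f} MHz "
          f"(nominal {f_nom/1e6:.1f} MHz)")
```

Because f_u scales with 1/RC, a symmetric spread on the RC product produces an asymmetric spread on the unity-gain frequency, which is why both the "too slow" and "too fast" cases discussed below must be considered.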
It can be shown that, in the presence of a full-scale input signal, the RC time constant variations modify the CT ΣΔ ADC output spectrum in two different ways. Firstly, when the RC time constants of the integrators are too large, the quantization noise shifts into the signal band and reduces the signal-to-noise ratio (SNR) performance. Secondly, when the RC time constants of the integrators are too small, the loop filter becomes unstable because the noise transfer function is too aggressive. In both situations, the in-band noise (IBN) increases and consequently the signal-to-noise ratio decreases.
For instance, a 1-bit, single-loop, feedforward CT ΣΔ ADC clocked at 288 MHz with 70 dB SNR in a 4 MHz bandwidth is suitable for a highly digitized zero-IF (ZIF) DVB-H receiver. In this case, the simulated signal-to-quantization-noise ratio (SQNR) is equal to 80 dB when the RC time constant is nominal, and if one takes into account the circuit noise (thermal noise, 1/f noise and clock jitter), then the nominal SNR is equal to 72 dB. Therefore, no more than +/−10% spread can be tolerated on the RC time constant. Since the spread on the RC product is +/−25% in 90 nm CMOS technology, the RC time constant needs to be calibrated.
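The relation between the quoted 80 dB SQNR and the quoted 72 dB total SNR follows from independent noise sources adding in power. The sketch below back-calculates an illustrative circuit-noise SNR of about 72.75 dB (an assumption, chosen so that the power sum reproduces the quoted 72 dB; the text does not give this figure directly).

```python
import math

def snr_combine(*snrs_db):
    """Total SNR in dB when independent noise sources add in power."""
    total_noise = sum(10.0 ** (-s / 10.0) for s in snrs_db)
    return -10.0 * math.log10(total_noise)

sqnr_db = 80.0          # simulated SQNR at nominal RC (from the text)
circuit_snr_db = 72.75  # illustrative circuit-noise SNR (back-calculated assumption)

print(f"total SNR = {snr_combine(sqnr_db, circuit_snr_db):.1f} dB")  # ~72 dB
```

This also shows why only +/−10% RC spread is tolerable: the quantization-noise contribution already sits close to the circuit-noise floor, so any shift of quantization noise into the band quickly erodes the remaining 2 dB of margin above the 70 dB requirement.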
In the document by J. H. Shim et al., “A Hybrid Delta-Sigma Modulator with Adaptive Calibration”, in Proc. IEEE ISCAS, May 2003, pp. 1025-1028, it has been proposed to use both analog and digital integrators and to calibrate the digital integrators to match the analog integrators, so as to maintain good SNR performance. More precisely, the decimation filter output is monitored and the digital integrators are controlled so that the SNR of the decimated output is maximized.
To simplify the SNR measurement during calibration, a special input pattern must be used. This input pattern is an impulse train whose fundamental frequency lies out of band. Because of this specific input pattern, the decimated output does not contain any signal component, so the IBN must be estimated by calculating the variance of the output stream (a steepest-descent algorithm updates the digital coefficients of the digital integrators to minimize this variance). Unfortunately, this steepest-descent algorithm has a very slow convergence. Approximately 400 iterations are necessary to converge to the calibration values, which takes too much time (for instance 91 ms for 400 iterations with 2^16 samples at 288 MHz, since 400×2^16/(288×10^6) ≈ 91 ms) and thus prohibits calibration before each use of the CT ΣΔ ADC.
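The calibration scheme and its cost can be sketched as follows. The code below is a minimal illustration, not the cited implementation: the variance measurement is replaced by a hypothetical quadratic stand-in with its minimum at an arbitrary coefficient value of 0.8, and a finite-difference gradient estimate is used (the exact update rule of the cited work is not reproduced). The timing arithmetic at the end reproduces the 91 ms figure quoted above.

```python
def steepest_descent(measure_variance, coeff, step=0.05, iters=400, eps=1e-3):
    """Minimize the measured output variance (the IBN estimate) over one
    digital-integrator coefficient, using a finite-difference gradient."""
    for _ in range(iters):
        grad = (measure_variance(coeff + eps) - measure_variance(coeff - eps)) / (2 * eps)
        coeff -= step * grad
    return coeff

# Toy stand-in for the IBN measurement: quadratic with its minimum at a
# hypothetical optimal coefficient of 0.8 (illustrative assumption).
variance = lambda c: (c - 0.8) ** 2 + 0.01
print(f"converged coefficient ~ {steepest_descent(variance, 0.3):.3f}")

# Convergence-time figure quoted above: 400 iterations of 2**16 samples
# each, clocked at 288 MHz.
t_cal = 400 * 2 ** 16 / 288e6
print(f"calibration time ~ {t_cal * 1e3:.0f} ms")
```

Even on this idealized convex problem the loop runs its full 400 iterations; with each iteration requiring 2^16 modulator samples for the variance estimate, the 91 ms total follows directly, which is what makes per-use calibration impractical.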