The conversion of an analog signal into a digital signal has become a conventional operation in present-day electronic circuits, by virtue of standard commercially available components generally grouped together under the acronym ADC, for “Analog-to-Digital Converter”. A signal e(t), varying continuously in time and able to take any value, is represented in a form s(t) sampled in time. Each sample can take a finite number of possible quantized values, and each value is encoded on a well-defined number of bits. Each bit can take only two possible values, for example 1 or 0.
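The three steps just described (sampling in time, quantization in amplitude, binary encoding) can be sketched in a few lines of Python. This is an illustrative model only; the helper name `adc_convert`, the 3-bit resolution and the ±1 input range are arbitrary assumptions, not part of any real converter's interface:

```python
import math

def adc_convert(signal, t_sample, n_samples, n_bits, v_min=-1.0, v_max=1.0):
    """Illustrative ADC model: sample signal(t) in time, then quantize
    each sample onto 2**n_bits levels (hypothetical helper, not a real API)."""
    levels = 2 ** n_bits
    step = (v_max - v_min) / (levels - 1)
    codes = []
    for k in range(n_samples):
        v = signal(k * t_sample)                 # s(t): sampling in time
        v = min(max(v, v_min), v_max)            # clip to the input range
        codes.append(round((v - v_min) / step))  # nearest quantized level
    return codes

# A 1 kHz sine sampled at 8 kHz and encoded on 3 bits (8 levels, codes 0..7).
codes = adc_convert(lambda t: math.sin(2 * math.pi * 1000 * t), 1 / 8000, 8, 3)
```

Each returned code fits on n_bits; the sine peaks at samples 2 and 6 land on the top and bottom codes respectively.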
Conventional ADCs provide precision levels that are satisfactory at relatively low input signal frequencies, of the order of a few tens or even hundreds of megahertz. This means that, at these frequencies, the difference between the signal represented digitally at the output and the analog input signal is acceptable. But in the field of microwave frequencies, when the frequency of the input signal is of the order of a few gigahertz, the dynamics of conventional ADCs, i.e. their capability to sample/quantize the input signal both rapidly and accurately, turn out to be markedly inadequate. First of all, this is due to the inadequate rise time of an internal component of ADCs called the sample/hold circuit. It may be difficult for a sample/hold circuit to stabilize an input signal with a view to quantizing it if that signal is at too high a frequency, the duration required for this stabilization then being too long with respect to the sampling period. This introduces errors, i.e. digital samples can be unrepresentative of the analog signal. Moreover, the higher the sampling frequency, the less time is available to quantize each sample: each sample can then be encoded only on a reduced number of amplitude values. This intrinsically generates an error, owing to the reduced precision with which the amplitude of each sample is quantized. Consequently, the error inherent in the digitization method of a conventional ADC at a high sampling frequency is the sum of the error described above, related to the lack of rapidity of the sample/hold circuit, and of the quantization rounding error, which reflects the difference between the signal thus sampled/held and its quantized digital representation. This overall error is referred to, somewhat loosely, as “quantization noise” since, in practice, the part related to the quantization predominates (at least at low frequency). Thus, at high frequency, the difference between the signal represented digitally at the output and the analog signal at the input becomes non-negligible, and the precision of the ADC is no longer satisfactory.
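The quantization rounding error mentioned above can be made concrete with a short numerical check: for an ideal quantizer that rounds to the nearest level, the rounding error never exceeds half a quantization step (half an LSB), so reducing the number of bits directly enlarges the worst-case error. A minimal sketch, assuming a ±1 input range and the hypothetical helpers below:

```python
def quantize(v, n_bits, v_min=-1.0, v_max=1.0):
    """Round v to the nearest of 2**n_bits levels (illustrative helper)."""
    step = (v_max - v_min) / (2 ** n_bits - 1)
    v = min(max(v, v_min), v_max)
    return v_min + round((v - v_min) / step) * step

def worst_rounding_error(n_bits, n_points=2001):
    """Largest |quantized - original| over a fine grid of in-range inputs."""
    grid = [-1.0 + 2.0 * i / (n_points - 1) for i in range(n_points)]
    return max(abs(quantize(v, n_bits) - v) for v in grid)
```

On this model the 8-bit worst-case error stays below half an 8-bit step, while dropping to 4 bits makes the worst-case error markedly larger, which is the loss of amplitude precision described above.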
In summary, the precision of conventional ADCs decreases as the frequency of the analog signal e(t) applied at their input increases. They are therefore not suitable for very high-frequency applications demanding good digital precision, such as radar.
A method called sigma-delta modulation provides for improving the precision of an ADC locally around a given frequency, if necessary a high one. The basic principle is to make the digital output signal vary arbitrarily, or to “modulate” it, so as to minimize the error for any spectral component contained in the relevant band (which depends on the use), even if it means that individual samples of the digital output signal can appear unrepresentative of the analog input signal. To this end, sigma-delta modulation requires, as a matter of principle, that the signal be strongly oversampled, which, as explained earlier, can be done only on a small number of bits. This amounts to improving the time-domain precision by cutting the signal into a large number of samples, at the cost of a reduction in amplitude precision due to the increase in sampling frequency. However, by relying on this oversampling, the digital output signal can be modulated in order to minimize the power of the resulting quantization noise in a defined frequency band.
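This trade-off can be illustrated with a first-order, 1-bit sigma-delta modulator, a minimal textbook form chosen here for brevity (not the only possible architecture, and not necessarily the one used in any given product): each output sample is a single bit, individually unrepresentative of the input, yet the running average of the bitstream tracks the input value.

```python
def sigma_delta_1bit(samples):
    """First-order sigma-delta modulator: accumulate (sigma) the difference
    (delta) between the input and the fed-back output, quantize on 1 bit."""
    integ, y, bits = 0.0, 0.0, []
    for x in samples:            # x assumed in [-1, 1], heavily oversampled
        integ += x - y           # integrate the input/output error
        y = 1.0 if integ >= 0 else -1.0   # 1-bit quantizer
        bits.append(y)
    return bits

# A constant input of 0.5: each individual bit is only +1 or -1, but the
# mean of a long bitstream converges to the input value.
bits = sigma_delta_1bit([0.5] * 1000)
mean = sum(bits) / len(bits)
```

The integrator keeps the accumulated error bounded, so the average of N output bits differs from the input by at most a few parts in N, even though every single sample is “wrong”.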
In the frequency or spectral domain, it is commonly said that sigma-delta modulation makes the quantization noise “compliant”. This is because the modulation of the digital output signal, which is adapted to the frequency of the input signal, amounts to minimizing the spectral density of the quantization noise around the frequency of the useful signal. In effect, the spectrum of the quantization noise is made “compliant” with an ideal spectrum presenting a trough near the frequency of use. Thus, even though a significant overall quantization noise is intrinsically generated in sigma-delta modulation, regardless of the frequency of the input signal, this quantization noise at least has low power close to the frequency of use.
A sigma-delta modulator can be implemented from an ADC converter conventionally controlled in a feedback loop, with a view to lessening the effect of its quantization noise on its digital output. In this case, a digital-to-analog converter, hereafter referred to as a DAC converter, converts the digital output signal of the ADC converter back to analog so that it can be subtracted from the input signal, following the principle of closed-loop control. An amplifier and a loop filter complete the loop, which thus circumvents the drawback of conventional ADCs by combining high frequency with fine resolution.
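The loop just described can be sketched in software, mapping each element to a line of code. The pure integrator standing in for the loop filter and the ±1 mapping standing in for the DAC are simplifying assumptions made only for illustration, not the actual circuit:

```python
import math

def sigma_delta_loop(samples):
    """One-bit sigma-delta loop: the 'DAC' feeds the previous output code
    back as an analog level, the subtraction closes the loop, and an
    integrator plays the role of the loop filter (simplified model)."""
    filt, fed_back, out = 0.0, 0.0, []
    for x in samples:
        error = x - fed_back               # subtraction of the DAC output
        filt += error                      # loop filter (a pure integrator)
        code = 1 if filt >= 0 else 0       # 1-bit ADC (the quantizer)
        fed_back = 1.0 if code else -1.0   # 1-bit DAC back to analog
        out.append(code)
    return out

# An oversampled slow sine: averaging the bitstream over a short window
# recovers the input far more accurately than directly rounding each sample
# to +/-1, because the loop pushes the quantization noise out of the low band.
N = 8192
x = [0.5 * math.sin(2 * math.pi * 4 * n / N) for n in range(N)]
levels = [1.0 if c else -1.0 for c in sigma_delta_loop(x)]
W = 64
recovered = [sum(levels[n:n + W]) / W for n in range(N - W)]
err_loop = max(abs(recovered[n] - x[n + W // 2]) for n in range(N - W))
err_direct = max(abs((1.0 if v >= 0 else -1.0) - v) for v in x)
```

On this model the averaged loop output stays close to the sine, while direct 1-bit rounding is off by up to a full unit near the zero crossings: the in-band error is small even though each output sample carries only one bit.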
During the design of such a modulator, it is necessary to adjust the loop in order to ensure that its frequency response enables stable operation. The stability of the loop is characterized in the frequency domain by examining the complex open-loop response, which must meet the Nyquist criterion. To this end, a digital network analyzer must be available to carry out this measurement, which is then transferred to a display device.
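Such a stability check can be mimicked numerically: given a model of the open-loop response, one evaluates it at the 0 dB gain crossover and reads off the phase margin. The integrator-plus-delay model and the numeric values below are hypothetical, chosen only to make the computation concrete:

```python
import cmath
import math

def open_loop(f, f_unity=1.0e6, delay=50e-9):
    """Hypothetical open-loop response: an integrator whose gain crosses
    0 dB at f_unity, cascaded with a pure loop delay (extra phase lag)."""
    s = 2j * math.pi * f
    return (2 * math.pi * f_unity / s) * cmath.exp(-s * delay)

# At the 0 dB crossover the integrator contributes -90 degrees and the delay
# another -360 * f * delay degrees; the phase margin is the distance to -180.
h = open_loop(1.0e6)
gain_db = 20 * math.log10(abs(h))
phase_margin_deg = 180 + math.degrees(cmath.phase(h))
```

With these assumed values the gain crosses 0 dB at 1 MHz with a comfortable margin; a drift that lengthens the loop delay eats directly into this margin, which motivates the monitoring described below.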
Following this initial adjustment, frequency, phase and amplitude drifts appear, notably due to the ageing of components of the modulator and to temperature variations. These drifts are referred to as “offsets” hereafter in the description. For example, if a variation in the loop delay causes the phase to reach ±π within the loop bandwidth, the modulator has probably become unstable. Conversely, the gain must cross 0 dB at frequencies at which the phase margin is maximum.
It is imperative during operation to compensate regularly for these offsets.