This invention relates to calibration techniques and more particularly to calibration techniques for analog to digital converter circuits.
As is known in the art, radar and communication systems generally include analog receivers which receive an input RF signal, filter and possibly time-gate said signal, and downconvert the signal to a lower frequency signal generally referred to as an intermediate frequency (IF) signal. One characteristic of a receiver is its dynamic range, which can be described as the difference between the maximum and the minimum signal levels to which the receiver can provide a linear response.
Many radar and communication systems also include advanced digital signal processors. It is generally desirable to convert the analog signals to a digitized representation of said analog signals at as high a frequency as possible given available analog to digital converter techniques. Such digitized signals are fed to the digital signal processor. Thus, the analog receiver provides analog signals having a wide dynamic range directly to an analog to digital converter circuit (hereinafter ADC).
The ADC is fed the analog signals and provides, in response thereto, a digitized output signal. Ideally the digitized output signal provides an accurate representation of the analog input signal. In practice, however, the digitized signals from the ADC do not accurately represent the analog signal.
That is, the ADC typically has a dynamic range which is less than the dynamic range of the receiver and, consequently, the analog signals fed thereto. When fed analog input signals having a large amplitude for example, the ADC provides a digitized output signal having harmonic distortion. That is, the ADC fails to provide a linear response to signals fed thereto. Furthermore, because the ADC provides a discrete voltage level for a continuous range of analog voltage levels fed thereto, there exists a so-called quantization error which may be defined as the difference between the analog value and its quantized representation. These are sources of error in radar and communication systems. Thus the ADC limits the performance of radar and communication systems.
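The quantization error described above can be illustrated with a brief sketch. The following is a minimal, hypothetical model of an ideal N-bit ADC (the function name, bit width, and full-scale voltage are illustrative assumptions, not taken from any particular converter); the error is simply the difference between the analog value and its quantized representation.

```python
def quantize(v, n_bits=8, full_scale=1.0):
    """Round a voltage in [-full_scale, +full_scale) to the nearest ADC level."""
    lsb = 2.0 * full_scale / (2 ** n_bits)          # size of one quantization step
    code = round(v / lsb)
    # clamp to the representable range of an n_bits two's-complement code
    code = max(-(2 ** (n_bits - 1)), min(2 ** (n_bits - 1) - 1, code))
    return code * lsb

v = 0.1234
q = quantize(v)                                     # nearest representable level
quant_error = v - q                                 # the quantization error
```

For an ideal converter this error is bounded by half of one quantization step; harmonic distortion, by contrast, is not reduced simply by adding bits.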
Nevertheless, the digitized output signal is fed to a digital signal processor as is generally known. Thus the ADC receives analog signals and subsequently provides digitized output signals to other portions of the radar system or communication system.
However, in many applications such digital signals are often distorted. The quantization error in the ADC may be reduced by providing an ADC having a large number of bits. This technique, however, fails to reduce errors due to harmonic distortion. It is known in the art that calibration techniques can be used to provide ADCs having a linear response to analog signals fed thereto and thus reduce the harmonic distortion of the ADC.
One paper entitled "A Phase Plane Approach to the Compensation of High-Speed Analog-to-Digital Converters" by T. A. Rebold and F. H. Irons published in the 1987 IEEE International Symposium on Circuits and Systems describes a technique to calibrate an ADC. In this technique, a signal source sequentially provides a plurality of sinusoidal calibration signals (i.e. a sinewave signal) to the input port of an analog receiver. An ADC is coupled to the output port of the analog receiver. Each one of the plurality of calibration signals should have a frequency which is synchronous with a submultiple of the ADC sample rate. The signal source provides the calibration signals having frequencies extending to the maximum frequency in the application band and having distortion sidebands which are lower than the maximum allowable system specification. The ADC receives the sinusoidal calibration signal from the analog receiver and provides a distorted digitized calibration sinewave at its output terminal.
During the calibration, a compensation processor (e.g. the CPU of a digital computer) provides a reference sinewave and subtracts the distorted sinewave from the reference sinewave. That is, the compensation processor subtracts each digitized signal provided at the output port of the ADC from a corresponding point of the reference signal. The amplitude and time delay of either the distorted or the reference sinewave are adjusted to minimize the difference between the two signals. The compensation processor also computes the slope of the distorted sinewave at desired time intervals.
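The error measurement described above can be sketched as follows. This is an illustrative model only (the sample rate, calibration frequency, quantization level, and array names are assumptions); a least-squares fit of amplitude and phase plays the role of adjusting the amplitude and time delay of the reference sinewave to minimize the difference between the two signals.

```python
import numpy as np

fs = 1.0e6                       # assumed ADC sample rate
f0 = fs / 64.0                   # calibration frequency, a submultiple of fs
t = np.arange(1024) / fs

# stand-in for the distorted, digitized calibration sinewave from the ADC
adc_out = np.round(127 * np.sin(2 * np.pi * f0 * t)) / 127

# least-squares fit of the amplitude and phase of a reference sinewave,
# equivalent to adjusting amplitude and time delay to minimize the error
basis = np.column_stack([np.sin(2 * np.pi * f0 * t), np.cos(2 * np.pi * f0 * t)])
coef, *_ = np.linalg.lstsq(basis, adc_out, rcond=None)
reference = basis @ coef

error = reference - adc_out             # residual ADC error at each sample
slope = np.gradient(adc_out, t)         # slope of the distorted sinewave
```

In this sketch the residual is dominated by quantization, so its magnitude stays near half of one quantization step of the simulated converter.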
Noise, generally referred to as dither noise, having a voltage level corresponding to the quantization noise power of the ADC is added to the distorted sinewave to randomize the quantization error of the ADC. The measurement is therefore performed several times to statistically average the results and thus remove the random errors induced onto the calibration sinewave by the so-called dither noise.
The compensation processor computes the value of the average error between the distorted and reference sinewaves. The compensation processor then stores the average error value as a compensation value in a compensation memory (e.g. a random access memory or RAM). In the above technique, the amplitude and the change in amplitude with respect to time (i.e. the slope) of the output signal at the output port of the ADC are used to provide the addresses to the memory location in which a compensation value is stored. That is, the amplitude and slope taken together correspond to an address location of the compensation memory. The compensation value corresponding to the particular amplitude and slope values is thus stored in the corresponding memory location. Thus the compensation processor provides compensation values to the compensation memory by measuring the difference between the digitized calibration signal and the reference signal at the output port of the ADC.
In this calibration approach each sinewave provides relatively few compensation values to the compensation memory. Thus, many sinewaves having different amplitudes and frequencies are required to provide each memory location of the compensation memory with a compensation value. When the receiver is operated in the receive mode, the ADC provides at its output port digitized output signals having errors. The amplitudes and the slopes of the digitized output signals correspond to addresses of memory locations in the compensation memory. The compensation value stored in the memory location of the compensation memory is provided to the output port of the ADC. The compensation value is added to the digitized output signal having errors. The compensation value thus compensates the errors of the digitized output signals.
This calibration technique provides compensation for errors resulting from both integral non-linearity, such as third order distortion, and differential non-linearity, such as errors in ADC quantization levels. Furthermore, distortion related to the input signal slew rate can be corrected.
However, one problem with the conventional calibration approach is that it provides an improvement in the dynamic range of the ADC only for analog signals having a frequency at the frequency of the calibration signal. Thus, it would be desirable to provide a technique which improves the dynamic range of the ADC for signals having frequencies other than the calibration signal frequencies.
A second problem with the conventional approach is that it requires a large amount of time to provide the compensation memory with an adequate number of compensation values. This is because each single sinewave calibration signal provides the compensation memory with relatively few compensation values. Thus many single sinewave calibration signals, each having a different amplitude and/or frequency, are required to provide the compensation memory with an adequate number of compensation values.
Many applications, particularly radar system and communication system applications, must operate on a real-time basis. Thus it is desirable to minimize the amount of calibration time.
The amount of time required to fill the compensation memory may be reduced by thinning, that is, by providing the compensation memory having fewer compensation values. The compensation processor could then interpolate between compensation values to provide estimates of the missing compensation values. However, thinning is applicable only if the compensation values in the compensation memory are relatively predictable in that region of the compensation memory having relatively few compensation values. Thus, it would be desirable to provide the compensation memory having a uniform distribution of compensation values to minimize the amount of thinning and interpolation.
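The interpolation step mentioned above can be sketched briefly. In this illustrative fragment (the marker convention and function name are assumptions), missing compensation values in one row of the thinned table are marked NaN and estimated linearly from their measured neighbors; as noted, this is workable only where the stored values vary predictably.

```python
import numpy as np

def fill_missing(row):
    """Linearly interpolate NaN entries in one row of the compensation table."""
    row = np.asarray(row, dtype=float)
    idx = np.arange(row.size)
    known = ~np.isnan(row)               # addresses holding measured values
    return np.interp(idx, idx[known], row[known])
```

A uniform distribution of measured values keeps the gaps short, so the interpolated estimates stay close to the values a full calibration would have produced.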