The present invention relates to waveform digitizers in general and in particular to a system for measuring the effective resolution of a digitizer based on its output data sequence in response to an input sinewave.
A typical waveform digitizer periodically samples and stores the magnitude of an input signal and converts the stored samples to a waveform data sequence representing the time-varying magnitude of the input signal. The "effective bits of resolution" of a digitizer is the number of bits of each element of its output waveform data sequence which reflect the magnitude of the input signal and which are substantially unaffected by noise. Random noise may cause variation in the timing of each sample, in the stored sample voltage, or in the magnitude of reference signals used in digitizing the sample voltage, all of which may cause random variations in the least significant bits of the digitizer output. Thus, although a particular digitizer may produce eight-bit output data, if its three least significant bits vary randomly with noise in the system, the digitizer has only five effective bits of resolution.
A commonly used and more rigorous definition of effective bits of resolution is based on the assumptions that the quantization noise is uniformly distributed throughout the range of a digitizer and that quantization errors from sample to sample are statistically independent. Under these assumptions, the effective bits of resolution B of a digitizer is given by:

B = log_2 [FullScale/(RMSE*sqrt(12))]     (1)

where FullScale is the value of the full scale output of the digitizer and RMSE is the "root mean square error" of the digitized signal:

RMSE = sqrt[(1/N) * SUM(k=1 to N) (s_k - s'_k)^2]     (2)

where N is the number of samples in the waveform data sequence produced by the digitizer in response to an input signal, s_k is the value of the kth element of the waveform data sequence representing the value of the kth sample of the input signal, and s'_k is the actual magnitude of the input signal at the time the kth sample was taken. To measure the effective bits of resolution of a digitizer, a test signal which varies in magnitude over the full scale input range of the digitizer is applied as input to the digitizer, and the known behavior of the input signal is compared to the output data sequence produced by the digitizer in accordance with the above equations. Unfortunately, this prior art method relies on a test signal whose magnitude can be accurately controlled, preferably in a continuous fashion, over the input range of the digitizer, and signal generators capable of producing such test signals are expensive and usually require frequent calibration.
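The effective-bits computation described above can be sketched in a few lines. The following is an illustrative sketch, not part of the original disclosure; the function and parameter names (`effective_bits`, `samples`, `reference`, `full_scale`) are chosen here for clarity, with `samples` playing the role of the sequence s_k, `reference` the actual input values s'_k, and `full_scale` the digitizer's full scale output value:

```python
import math

def effective_bits(samples, reference, full_scale):
    """Estimate effective bits of resolution per the equations above.

    samples    -- waveform data sequence s_k produced by the digitizer
    reference  -- actual input-signal magnitudes s'_k at the sample times
    full_scale -- full scale output value of the digitizer
    """
    n = len(samples)
    # Root mean square error between digitizer output and actual signal.
    rmse = math.sqrt(sum((s - r) ** 2 for s, r in zip(samples, reference)) / n)
    # B = log2(FullScale / (RMSE * sqrt(12))); an ideal B-bit quantizer has
    # RMSE = FullScale / (2**B * sqrt(12)), so this recovers B exactly.
    return math.log2(full_scale / (rmse * math.sqrt(12)))
```

For example, rounding an ideal sine wave to integer codes over a 256-level range simulates an ideal eight-bit quantizer, and the function returns a value near 8.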