Due to its robustness and precision, digital signal processing (DSP) has replaced analog signal processing (ASP) in most technical fields and has enabled the development of information systems such as mobile communication systems and sophisticated medical aids. However, the real world is analog by nature, and there is therefore an increasing need for high-performance analog-digital interfaces (ADIs), in the simplest case realized using a conventional analog-to-digital converter.
The role of the ADI is to convert signals from analog to digital representation. This conversion is principally done in two steps: sampling and quantization. The resolution of the ADI is the number of bits used in the quantization. The data rate of the ADI is the number of samples produced per second on average.
Typically, the sampling is performed with a fixed sampling period, T. This type of sampling is called uniform sampling, and the data rate is simply 1/T. Processing as well as reconstruction are simplified if the signals have been sampled uniformly, and it is therefore assumed here that the ADI is to perform uniform sampling. Since ADIs incorporate quantizers, it is inevitable that errors are introduced in the conversions. In an ideal ADI, the quantization errors can be made arbitrarily small by increasing the resolution, i.e., by allocating more bits for the sample values. However, in a physical ADI, the situation is not as simple because, in such a physical device, several other types of errors will eventually dominate over the quantization errors. In other words, the effective (true) resolution is determined by the influence of several different physical phenomena, and it tends to decrease as the data rate increases.
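As an illustration of the two conversion steps, the following is a minimal sketch of an idealized ADI (uniform sampling followed by uniform quantization); the function and parameter names are illustrative only, and the model ignores all physical error sources, so the error is bounded by half a quantization step and shrinks as bits are added:

```python
import numpy as np

def uniform_sample_and_quantize(x, n_samples, T, bits, full_scale=1.0):
    """Idealized ADI model: uniform sampling followed by uniform quantization.

    x          -- the analog signal, given as a function of time
    n_samples  -- number of samples to produce
    T          -- sampling period (data rate = 1/T)
    bits       -- quantizer resolution in bits
    """
    t = np.arange(n_samples) * T             # uniform sampling instants t = n*T
    samples = x(t)
    step = 2.0 * full_scale / 2 ** bits      # quantization step size
    codes = np.round(samples / step)         # mid-tread quantizer
    codes = np.clip(codes, -2 ** (bits - 1), 2 ** (bits - 1) - 1)
    return codes * step                      # quantized sample values

# In this ideal model, more bits -> smaller quantization error:
T = 1e-3                                     # data rate 1/T = 1 kHz
sig = lambda t: 0.9 * np.sin(2 * np.pi * 50.0 * t)
for bits in (4, 8, 12):
    y = uniform_sample_and_quantize(sig, 100, T, bits)
    print(bits, np.max(np.abs(y - sig(np.arange(100) * T))))
```

In a physical ADI, as noted above, the error would not keep shrinking in this way, because other error sources eventually dominate.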
A natural way of increasing the ADI performance is to operate several converters in parallel, enabling an increase in data rate and/or resolution. In a conventional time-interleaved converter, the data rate is increased whereas the resolution is ideally maintained (the same as in each sub-converter).
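The time-interleaving principle can be sketched as follows: each of M sub-converters samples at the slow rate 1/(M*T), and interleaving their outputs yields the full data rate 1/T. This is an idealized model, without quantization or channel mismatch, and the names are illustrative:

```python
import numpy as np

def time_interleave(x, n_total, T, M):
    """Idealized M-channel time-interleaved converter: sub-converter m takes
    every M-th sample, starting at offset m, so each channel runs at the slow
    rate 1/(M*T) while the combined output has the full data rate 1/T."""
    out = np.empty(n_total)
    for m in range(M):
        k = np.arange(m, n_total, M)        # sample indices handled by channel m
        out[k] = x(k * T)                   # channel m samples at t = k*T
    return out

# With ideal channels, the interleaved output equals direct uniform sampling:
T = 1e-3
sig = lambda t: np.cos(2 * np.pi * 100.0 * t)
assert np.allclose(time_interleave(sig, 64, T, 4), sig(np.arange(64) * T))
```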
A drawback with parallel ADCs is that the parallelization introduces channel mismatch errors that degrade the resolution. In order to obtain both a high data rate and high resolution, it is therefore necessary to incorporate additional digital signal processing that estimates and corrects for the channel mismatch errors. Errors introduced by the parallel structure are, for example, gain and/or offset errors, and timing errors between the different ADCs. Correcting for the gain and/or offset errors is easily done in a precondition module, but the technical challenge lies in removing the timing errors between the different ADCs.
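The gain/offset correction in the precondition module can be sketched as follows, assuming each channel is modeled as y = g*x + o and that the per-channel gains and offsets have already been estimated (the function name and the modeling are illustrative only):

```python
import numpy as np

def correct_gain_offset(channel_samples, gains, offsets):
    """Hypothetical precondition module. Channel m is modeled as producing
    y_m = g_m * x + o_m; with estimates of g_m and o_m, the ideal samples
    are restored by inverting that model."""
    return [(y - o) / g for y, g, o in zip(channel_samples, gains, offsets)]

# Example with two channels carrying known gain/offset errors:
true = [np.array([0.1, 0.2, -0.3]), np.array([0.4, -0.1, 0.0])]
gains, offsets = [1.02, 0.98], [0.01, -0.02]
measured = [g * x + o for x, g, o in zip(true, gains, offsets)]
restored = correct_gain_offset(measured, gains, offsets)
```

The timing errors, by contrast, cannot be removed by such a per-sample operation, which is why they are the difficult part.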
In order to remove these mismatch errors (i.e., to deskew the digitized signal), the timing errors between the different ADCs must first be determined. These timing errors can then be used to define the filter coefficients of a reconstruction filter adapted to remove the timing errors from the digitized signal.
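One possible form of such a reconstruction (deskew) filter is a fractional-delay FIR. The windowed-sinc design below is a sketch only, assuming the timing error has already been estimated; it is not necessarily the filter design used in the systems cited below, and practical designs typically use more elaborate filter optimization:

```python
import numpy as np

def fractional_delay_fir(timing_error, n_taps=21):
    """Sketch of a deskew filter: a windowed-sinc fractional-delay FIR.

    timing_error is the channel's skew in sampling periods: a channel that
    sampled at t = (n + timing_error) * T is shifted back onto the uniform
    grid by delaying its sequence by timing_error samples.
    """
    n = np.arange(n_taps) - (n_taps - 1) / 2.0
    h = np.sinc(n - timing_error)           # ideal fractional-delay response
    h *= np.hamming(n_taps)                 # window to truncate gracefully
    return h / h.sum()                      # unity gain at DC
```

Applying the filter with, e.g., np.convolve(channel, h, mode='same') then approximately restores the uniform-grid samples for signals well below the Nyquist frequency.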
One approach to determine the timing errors is to apply a known calibration signal, and compare the resulting digitized signal with the expected result. An example of this approach is given in the journal paper “A digital-background calibration technique for minimizing timing-error effects in time-interleaved ADC's” by H. Jin and E. K. F. Lee. Such an approach requires careful timing of input and output, in order to enable a correct comparison, and this makes the method very difficult to implement with high precision.
Instead, it has been proposed to estimate the timing errors from an unknown but bandlimited signal. One example of such an estimation in a parallel ADC is given in WO 04/079917. In the system described in WO 04/079917, the digitized signal can be used to estimate the timing errors, as long as it is bandlimited to the system bandwidth. However, this requires feedback of the reconstructed signal to the estimator, so that each iteration of the timing error estimation is based on the current reconstruction.