Time-to-digital converters (TDCs) are used in most digital phase-locked loop architectures to quantize the phase difference between two signals: a high-frequency local oscillator signal (usually produced by a voltage-controlled oscillator (VCO)) and a lower-frequency reference clock signal (almost always derived from a crystal-based oscillator). Quantization error in the converter introduces noise whose level scales with the quantization step, so minimizing the quantization error is important for attaining low in-band phase noise. A good rule of thumb in this regard is that the noise contributed by the quantization resolution should lie below the close-in phase noise level of the reference (REF) clock signal. On the other hand, the quantizer (or TDC) should feature a sufficiently large dynamic range, extending over more than one period at the lowest required VCO output frequency (across all process and environmental variations, of course).
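The two requirements above can be put into a minimal numeric sketch. All values here (10 ps step, 2 GHz minimum VCO frequency) are illustrative assumptions, not figures from the text; times are kept in picoseconds so the arithmetic stays exact.

```python
import math

RESOLUTION_PS = 10                 # assumed 10 ps quantization step
VCO_MIN_FREQ_HZ = 2_000_000_000    # assumed lowest required VCO frequency

def quantize_time(delta_ps: float) -> int:
    """Return the TDC output code for a given time difference in ps."""
    return round(delta_ps / RESOLUTION_PS)

# Dynamic range requirement: the TDC must span more than one VCO period
# at the lowest required output frequency.
period_ps = 1e12 / VCO_MIN_FREQ_HZ            # 500 ps at 2 GHz
levels_needed = math.ceil(period_ps / RESOLUTION_PS)
```

With these assumed numbers the converter needs on the order of 50 quantization levels; shrinking the step to improve in-band noise raises that count proportionally.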
A straightforward approach employed in industry, owing to its relative simplicity, is the Flash-TDC, which uses a number of sampling elements equal to the number of quantization levels. For a system that requires both a large dynamic range and a fine resolution (as prospective wireless radio standards demand), a large ensemble of sampling elements is needed. A plethora of design problems stems from such a large ensemble, ranging from the large chip area the device occupies to its high peak power consumption. The simultaneous switching of many sampling elements produces current spikes that are especially difficult to accommodate and mask, for example, to keep them from causing a significant disturbance on the supply voltage. As a result, the signature of the TDC sampling operation is felt across the entire integrated circuit in the form of spurious signals that plague both the transmit and receive chains.
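The Flash-TDC structure can be sketched as a chain of delay elements, one per quantization level, each tapped by a sampler, with the output read as a thermometer code. This is a hedged behavioral model, not a circuit description; parameter names and values are illustrative.

```python
def flash_tdc(delta_ps: int, resolution_ps: int, n_elements: int) -> int:
    """Count how many delay taps the edge has passed (thermometer code)."""
    taps = delta_ps // resolution_ps
    return min(max(taps, 0), n_elements)   # code saturates at the range ends

# Fine resolution over a wide range forces a large ensemble: covering a
# 500 ps VCO period with 1 ps steps takes 500 sampling elements, each of
# which switches on every conversion.
n_elements = 500
```

The one-element-per-level scaling is the root of the area and peak-current problems described above: halving the resolution doubles the number of simultaneously clocked samplers.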
Two alternatives to simply enlarging the sampling ensemble for increased dynamic range have been presented in the literature. The first reuses elements by folding the linear delay line of the TDC into a ring, hence the name Ring-TDC. A Ring-TDC is particularly cumbersome to design in terms of its timing scheme and decoding mechanism, and it has a higher 1/f noise content, but it allows considerable savings in sampling-ensemble size. The second approach introduces phase prediction: the edge of the VCO signal is localized by prediction and juxtaposed with an appropriately delayed replica of the reference signal. This scheme is, however, sensitive to the prediction quality and requires cumbersome calibration to cancel the integral non-linearity inherent to every TDC.
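The element-reuse idea behind the Ring-TDC can be sketched as follows; the ring size and tap counts are assumptions chosen for illustration only.

```python
def ring_tdc_decode(total_taps: int, ring_size: int) -> tuple[int, int]:
    """Split a raw tap count into (laps around the ring, tap position)."""
    return divmod(total_taps, ring_size)

# A measurement that would need 500 taps on a linear Flash-TDC line can
# be covered by, say, a 16-element ring plus a small lap counter:
laps, pos = ring_tdc_decode(123, 16)   # 123 taps = 7 full laps + tap 11
```

The hardware saving comes from trading the long delay line for a counter and the wrap-around decoding logic, which is exactly the timing and decoding complexity the text flags as the Ring-TDC's main design burden.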