Time interleaving is a technique used to increase the overall sampling rate of a system by using multiple sampling elements in parallel. For example, multiple slicers/samplers may be used in parallel in a serializer/deserializer application, or multiple sub-analog-to-digital converters may be used in parallel in a digital signal processing based receiver. By using time interleaving, higher bit rates may be achieved while keeping the clock frequency lower than if time interleaving were not used. This may result in lower power dissipation in the system compared to a non-interleaved design.
In an N-way time interleaved receiver, an overall sampling rate fs may be obtained by using N sampling elements, with each of the sampling elements sampling at a different, equally spaced, phase of a clock that has a clock rate of fs/N.
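The round-robin operation described above can be sketched numerically. In this illustrative model (the variable names and the 8 GS/s rate are assumptions, not from the text), each of N = 4 sub-samplers runs at fs/N with its sample instants offset by one full-rate period 1/fs from its neighbor, and interleaving the N streams reproduces the instants of a single sampler running at fs. Times are kept as integer picoseconds so the comparison is exact.

```python
N = 4            # number of parallel sampling elements (assumed value)
T = 125          # full-rate period 1/fs in picoseconds (fs = 8 GS/s, assumed)
num_samples = 16

# Ideal full-rate sample instants: t_n = n * T
full_rate = [n * T for n in range(num_samples)]

# Sub-sampler k runs at fs/N (period N*T), phase-offset by k*T
sub = [[k * T + m * N * T for m in range(num_samples // N)] for k in range(N)]

# Round-robin interleave of the N sub-sampler output streams
interleaved = [sub[n % N][n // N] for n in range(num_samples)]

assert interleaved == full_rate  # the interleaved instants match full rate
print("interleaved sample times (ps):", interleaved)
```

The assertion holds because sample n is taken by sub-sampler n mod N at instant (n mod N)·T + (n div N)·N·T = n·T, which is exactly the n-th full-rate instant.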
Implementing time interleaving may involve precise multi-phase clock generation, such as the creation of N clocks, with each of the N clocks offset by substantially exactly one sampling period 1/fs from the next.
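The required clock phases can be expressed directly: since each clock runs at fs/N, a delay of one sampling period 1/fs corresponds to 360/N degrees of that divided-down clock. A minimal sketch, assuming a hypothetical helper `multiphase_offsets` and an 8 GS/s overall rate (neither is from the text):

```python
def multiphase_offsets(N, fs):
    """Return (delay_seconds, phase_degrees) for each of N clocks at fs/N.

    Successive clocks are delayed by one full-rate period 1/fs, i.e.
    360/N degrees of the divided clock (illustrative model only).
    """
    period = 1.0 / fs  # one full-rate sampling period
    return [(k * period, k * 360.0 / N) for k in range(N)]

# Example: a 4-way interleaver at fs = 8 GS/s needs clocks at
# 0, 90, 180, and 270 degrees, spaced 125 ps apart.
for delay, deg in multiphase_offsets(4, 8e9):
    print(f"delay = {delay * 1e12:.1f} ps, phase = {deg:.0f} deg")
```

Any static mismatch between these phases appears as a timing-skew error in the interleaved output, which is why the text emphasizes that the offsets must be substantially exactly 1/fs.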