Communication systems often transmit data with a clock embedded in a data stream, rather than being sent as a separate signal. When the data stream is received, a clock and data recovery circuit (CDR) recovers the embedded clock and retimes the received data to the recovered clock. Typically, a phase-locked loop (PLL) is used to perform the clock recovery operation. Such a PLL typically includes a phase detector, which receives the input data signal and a clock signal from a voltage-controlled oscillator (VCO). The phase detector generates an error signal, which is a function of the phase difference between the input data signal and the VCO clock signal. The phase detector may also include additional circuitry to generate the reconstructed data.
The phase detector, oftentimes a linear phase detector, is used to determine an optimal phase sampling point for the incoming data eye. However, such phase detectors rely on matched delays between the data and clock paths, and are therefore prone to large phase offsets that vary with process. Accordingly, a need exists to calibrate out systematic phase offsets of a phase detector.
In a typical high-speed linear phase detector, exclusive-OR (XOR) logic gates are used to generate output currents (I) that include an error pulse and a reference pulse that are provided to an integrating capacitor, C. The gain of the phase detector can be expressed as:
K_PD(s) = V_PD/Δφ = I/(2πsC)        [1]

where Δφ is a change in phase. However, the I/C ratio does not track well from silicon wafer to silicon wafer. Thus phase detectors fabricated on different wafers may have widely varying I/C ratios and accordingly varying gains. Such gain variations cause jitter transfer and jitter tolerance bandwidths to undesirably vary.
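The wafer-to-wafer gain variation described above can be illustrated numerically. The sketch below evaluates |K_PD| = I/(2π·|s|·C) at a given frequency for a nominal device and for an assumed process corner where I rises 20% and C falls 20%; all component values are illustrative assumptions, not taken from the source.

```python
import math

def kpd_magnitude(i_out, cap, freq_hz):
    """|K_PD(j*2*pi*f)| = I / (2*pi * |s| * C), with s = j*2*pi*f."""
    s_mag = 2 * math.pi * freq_hz
    return i_out / (2 * math.pi * s_mag * cap)

# Nominal device: I = 100 uA, C = 1 pF, evaluated at 1 MHz (assumed values).
nominal = kpd_magnitude(100e-6, 1e-12, 1e6)
# Hypothetical fast corner: +20% current, -20% capacitance.
fast = kpd_magnitude(120e-6, 0.8e-12, 1e6)

# Because gain scales with the I/C ratio, this corner shifts the gain
# (and hence the loop bandwidth) by a factor of 1.5.
print(fast / nominal)
```

Since the loop bandwidth of the PLL scales with K_PD, a 1.5x gain spread of this kind translates directly into the jitter transfer and jitter tolerance bandwidth variations noted above.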
Conventional linear phase detectors also include latches to store and pass data. A delay inherent in the latch from its clock input to its Q output (the clock-to-Q delay) adds an offset to the error pulse generated in the phase detector, because the Q output of the latch is input to an XOR along with the incoming data. Typically, this offset is reduced by inserting a delay into the data path of the incoming data before it is input to the XOR. However, it is very difficult to match this delay to the clock-to-Q delay of the latch. These unmatched delays cause non-optimal phase sampling due to offsets in the phase detector.
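A simplified timing model (an assumption for illustration, not the source's circuit) shows how the delay mismatch becomes a static offset: the XOR error pulse is stretched by the latch's clock-to-Q delay and shortened by the compensating data-path delay, so any residual difference between the two adds directly to the measured phase error.

```python
def error_pulse_width(phase_err_ps, t_clk_to_q_ps, t_data_delay_ps):
    """Width of the XOR error pulse, in picoseconds (simplified model).

    The clock-to-Q delay of the latch stretches the error pulse; the
    delay inserted in the data path is intended to cancel it exactly.
    """
    return phase_err_ps + t_clk_to_q_ps - t_data_delay_ps

# Perfectly matched delays: the pulse width equals the true phase error.
print(error_pulse_width(10.0, 35.0, 35.0))  # 10.0 ps

# A 5 ps mismatch appears directly as a systematic phase offset,
# pulling the sampling point away from the optimum.
print(error_pulse_width(10.0, 35.0, 30.0))  # 15.0 ps
```

In this model the offset is independent of the actual phase error, which is why it shifts the sampling point rather than merely scaling the loop response.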
A need thus exists to reduce gain errors of a phase detector and related components, such as an analog-to-digital converter (ADC). Furthermore, a need exists to reduce or eliminate offsets in a phase detector.