1. Field of the Invention
The present invention relates to test and measurement of signal timing jitter, especially for high-speed digital signals and circuits.
2. Description of Related Art
As the data rate of integrated circuit (IC) signals increases each year, it becomes more difficult to accurately measure timing parameters of the circuit signals.
Jitter is an especially important parameter that becomes more complex and expensive to test at higher frequencies, to the extent that it is sometimes impractical to test on every circuit manufactured. For very high speed data transmission (greater than one gigabit per second), the bit error ratio (BER) is typically specified as less than 10⁻¹². Measuring this BER directly is impossible in a reasonable production test time (less than a few seconds), so measuring the jitter that causes bit errors is often the only alternative. However, the jitter standard deviation that corresponds to this BER is typically less than ten picoseconds, and is extremely difficult to measure accurately or quickly; a one picosecond measurement error can correspond to a 50% error in the result. Measuring peak-to-peak jitter is unreliable because it depends strongly on the number of samples, and measurements are not very repeatable (their variance may exceed 50%) because single-shot events greatly affect the measurement.
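Under a purely Gaussian (random jitter) model, the BER and the jitter standard deviation are related through the Gaussian tail function, which is why a small absolute error in the measured standard deviation becomes a large error at the 10⁻¹² level. The following sketch inverts the tail probability by bisection; the function name and the pure-random-jitter convention TJ ≈ 2·Q·σ are illustrative assumptions, not part of any cited method.

```python
import math

def q_from_ber(ber):
    # Invert ber = 0.5 * erfc(q / sqrt(2)) by bisection: q is the number of
    # jitter standard deviations between the sampling instant and the edge
    # position at which the given bit error ratio is reached.
    lo, hi = 0.0, 20.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if 0.5 * math.erfc(mid / math.sqrt(2.0)) > ber:
            lo = mid          # tail probability still too large: move outward
        else:
            hi = mid
    return 0.5 * (lo + hi)

q = q_from_ber(1e-12)     # about 7.03 standard deviations for BER = 1e-12
tj_pp = 2.0 * q * 10.0    # total jitter (ps) at that BER for a 10 ps sigma, RJ only
```

For a BER of 10⁻¹², q is about 7.03, so a 10 ps standard deviation corresponds to roughly 141 ps of total jitter at that BER.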
Jitter is the variation in the rising and/or falling edge instants of a signal relative to the ideal times for these instants. FIG. 1 shows an example waveform that has jitter: its rising and falling edges occur at different times relative to a constant unit interval (UI); the differences are denoted in the figure as t0, t1, t2, . . . , t5. The constant intervals are the ideal times. Peak-to-peak jitter for megahertz (MHz) signals is typically less than a few nanoseconds (ns); for gigahertz (GHz) signals, it is typically less than a few tens of picoseconds (ps). Equipment that can measure picosecond jitter is typically quite bulky (more than a cubic foot), and connections between this equipment and a circuit-under-test (CUT) must be made very carefully to minimize the effect on the signal under test and on the measurement accuracy.
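Given a list of measured edge times and the UI, the deviations t0, t1, . . . can be computed directly. A minimal sketch; the function name and the choice of anchoring the ideal clock so the mean deviation is zero are illustrative assumptions.

```python
def edge_jitter(edge_times, ui):
    # Deviation of each measured edge from an ideal clock of period ui.
    # The ideal clock's starting phase is arbitrary; here it is chosen so
    # the mean deviation is zero (an illustrative choice, not a standard).
    dev = [t - n * ui for n, t in enumerate(edge_times)]
    offset = sum(dev) / len(dev)
    return [d - offset for d in dev]

# Edges of a nominally 1-unit-interval clock, two of them displaced:
t = edge_jitter([0.0, 1.02, 1.98, 3.0], ui=1.0)
```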
Oscilloscopes measure jitter by triggering on a first transition of the signal under test, and then capturing subsequent samples of the signal at a very high effective sampling rate compared to the signal frequency. Timing measurement units (TMUs) measure jitter by phase locking their internal PLL to the signal under test, and then measuring each of the signal's transition time deviations (t0, t1, t2, . . . , t5) with a precision delay line. Some oscilloscopes also use a PLL, sometimes implemented in software (a “golden” PLL) that analyzes a previously captured set of data points. Spectrum analyzers measure jitter by analog demodulation: each portion of the high frequency signal's bandwidth (a total bandwidth of interest centered around 1 GHz, for example) is converted to a constant low center frequency (zero or 100 kHz, for example), and the phase and/or magnitude of the resulting continuous-time low frequency signal is measured continuously as the demodulating frequency is swept from one end of the total bandwidth to the other. Connecting measurement equipment to a gigahertz signal typically affects the signal's amplitude, because of the finite AC impedance of the connection, and affects the signal's jitter, because each change in characteristic impedance along the signal's path to the equipment can cause reflections and shifts in the signal's transition times.
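The software "golden" PLL mentioned above can be sketched as a first-order loop over captured edge times. This is a minimal illustration, not any particular instrument's implementation: the loop gain alpha (a name assumed here) sets the loop bandwidth, jitter below that bandwidth is tracked out by the recovered clock, and the residual loop error is the high-frequency jitter.

```python
def golden_pll_hf_jitter(edge_times, ui, alpha=0.05):
    # First-order software ("golden") PLL sketch: a recovered clock advances
    # by one UI per edge and is nudged toward each measured edge by the loop
    # gain alpha. The residual error is the high-frequency jitter.
    recovered = edge_times[0]
    hf = []
    for t in edge_times:
        err = t - recovered          # phase error vs. recovered clock
        hf.append(err)
        recovered += ui + alpha * err
    return hf

# A clean 1-UI clock whose phase steps by +0.1 UI at edge 50: the loop
# first sees the step as jitter, then tracks it out.
edges = [n * 1.0 + (0.1 if n >= 50 else 0.0) for n in range(200)]
hf = golden_pll_hf_jitter(edges, ui=1.0)
```

After the step, the loop error decays by a factor (1 − alpha) per edge, so the step appears as high-frequency jitter and is then removed, as a hardware PLL's tracking would remove low-frequency jitter.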
Consequently, measuring picosecond jitter requires either a very high speed, low jitter oscilloscope or off-chip time measurement unit, which can cost more than $30K, or a long test time on automatic test equipment (ATE) that typically costs more than $1M.
Several built-in self test (BIST) circuits for measuring jitter have been reported (for example, U.S. Pat. No. 6,396,889 by Sunter et al, and U.S. Pat. No. 6,295,315 by Frisch et al), but they require a programmable delay line or a matched pair of oscillators, which are difficult to implement with low jitter (lower than that of typical gigahertz signals) in the presence of typical manufacturing-process variations and circuit noise. Some more recent techniques demodulate a signal to a lower frequency to permit easier jitter measurement. U.S. Patent Application No. US-2002/0176491, by Kleck et al, uses analog demodulation to convert a high frequency signal to a lower frequency signal, and then performs conventional jitter measurement on the low frequency signal. U.S. Patent Application No. US-2002/0136337, by Chatterjee et al, describes connecting a jittered clock to an analog-to-digital converter (ADC) having many bits of resolution; the ADC samples a known jitter-free analog sine wave, and analysis of the resulting binary-encoded sine wave reveals the amount of jitter in the clock.
PCT Application No. WO 99/57842, by Brewer et al, and U.S. Patent Application US-2002/0118738, by Whitlock, describe a method in which a clock is generated at a predetermined frequency offset (difference) from a clock-under-test. The phase of the two clocks is compared by counting the number of clock cycles occurring between instants at which an edge of one clock coincides with an edge of the other, and the minimum and maximum counts are recorded. This technique is too simple for many applications: it measures only peak-to-peak jitter, which is usually too variable to be a reliable parameter for production testing, and it requires too long a test time to obtain reliable results for low jitter systems. It is preferable to measure the standard deviation of the jitter, to enable an estimate of long-term peak-to-peak jitter, and to measure the frequency content of the jitter. Measuring high frequency (HF) jitter separately from low frequency (LF) jitter is important because many high speed data transmission standards specify the tolerable amount of jitter as a function of frequency. For example, the corner frequency separating HF from LF jitter is typically specified as the data rate frequency divided by 1667 (or 2500), and this corner frequency will typically be programmed as the loop filter frequency of the measurement unit's golden PLL.
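The beat-counting idea can be illustrated in simulation. This is a simplified model of my own construction, not the cited implementation: the offset clock slips by a fixed fraction d of the UI each cycle, a coincidence is declared when the wrapped relative phase rolls over, and the cycle counts between coincidences are recorded as in the cited method. With slowly varying jitter (a sinusoid is assumed here), the coincidence instants also subsample the jitter waveform with a time resolution of d.

```python
import math

UI = 1.0                  # unit interval of the clock under test (arbitrary units)
d = UI / 1000.0           # phase slip per cycle due to the frequency offset
A = 0.01 * UI             # assumed sinusoidal jitter amplitude (true pp = 2*A)
PERIOD = 7000             # assumed jitter period in cycles (slow vs. the slip d)

counts, rec = [], []
count, prev, k = 0, 0.0, 0
for n in range(1, 300000):
    jit = A * math.sin(2.0 * math.pi * n / PERIOD)
    r = (n * d + jit) % UI           # relative phase of the two clocks
    count += 1
    if r < prev:                     # phase rolled over: edges coincided
        counts.append(count)         # what the cited method records (min/max kept)
        rec.append((k + 1) * UI - n * d)  # jitter subsampled at the coincidence
        count, k = 0, k + 1
    prev = r

pp_est = max(rec) - min(rec)         # peak-to-peak jitter estimate, resolution ~d
```

The count between coincidences stays near UI/d = 1000 cycles, and the estimate's resolution is limited to d, which is one reason long test times are needed for fine resolution with this approach.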
When measuring jitter, it is important that random jitter be measured separately from deterministic jitter. Typically, this is done by analyzing the jitter histogram to find the Gaussian distribution that best fits the left and right tails of the histogram. A typical technique for reducing electro-magnetic interference in gigahertz signals is to modulate the transmit clock frequency with a much lower frequency, for example 30 kHz; this modulation appears as deterministic jitter. Thus, in addition to measuring the level of deterministic jitter, it is often important to measure the modulating waveform's shape.
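The tail-fitting step can be sketched as follows. All names, the bimodal (dual-Dirac-style) deterministic-jitter model, and the fit window are illustrative assumptions. The idea: for a Gaussian tail, the logarithm of the histogram counts is linear in the squared distance from the tail's underlying peak, with slope −1/(2σ²), so a linear regression over far-tail bins recovers the random-jitter σ.

```python
import math, random

def estimate_rj_sigma(samples, peak_pos, fit_lo, fit_hi, bin_w=0.5):
    # Histogram the far tail and regress ln(count) against (x - peak_pos)^2;
    # for a Gaussian tail the slope is -1/(2*sigma^2).
    bins = {}
    for x in samples:
        if fit_lo <= x < fit_hi:
            b = int((x - fit_lo) // bin_w)
            bins[b] = bins.get(b, 0) + 1
    pts = [((fit_lo + (b + 0.5) * bin_w - peak_pos) ** 2, math.log(n))
           for b, n in bins.items()]
    m = len(pts)
    su = sum(u for u, _ in pts); sy = sum(y for _, y in pts)
    suu = sum(u * u for u, _ in pts); suy = sum(u * y for u, y in pts)
    slope = (m * suy - su * sy) / (m * suu - su * su)  # least-squares slope
    return math.sqrt(-1.0 / (2.0 * slope))

random.seed(1)
A, sigma = 10.0, 2.0   # assumed deterministic half-amplitude and RJ sigma, in ps
samples = [random.choice((-A, A)) + random.gauss(0.0, sigma)
           for _ in range(200000)]
# Fit the far-left tail (2 to 3.5 sigma beyond the left peak at -A):
est = estimate_rj_sigma(samples, peak_pos=-A, fit_lo=-17.0, fit_hi=-14.0)
```

With this seed and sample count the estimate should land near the true 2 ps; in practice the peak positions must themselves be estimated from the histogram before the tails can be fitted.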
In addition to testing the transmitted jitter of a high-speed data transceiver, it is also necessary to test a receiver's jitter tolerance and to ensure that the receiver samples its input data in the middle of the signal eye opening. This is typically done by a jitter tolerance test, in which a specific amount of jitter is added to the input data signal and the BER is verified to be below some threshold. This test requires very precise edge placement at high frequencies, which adds significantly to the complexity and cost of a tester.
In summary, prior art jitter measurement techniques require a precision delay line or analog circuitry, or measure only peak-to-peak jitter. Furthermore, test equipment can measure jitter only on signals that it can access, and the access connection may itself increase the jitter.
It will be seen that there is a need for a simpler, lower cost technique that accurately measures jitter using circuitry that can tolerate manufacturing process variations and has minimal or no impact on the signal under test.