In the field of high-frequency (e.g., 1 to 40 gigabits/second (Gb/s)) telecommunications and data communications, a signal that is transmitted from one location to another may become degraded due to a number of factors. Such factors are generally referred to as signal impairments. Two types of signal impairments are jitter and noise. Jitter and noise may be caused by various types of sources, such as electromagnetic interference, crosstalk, data-dependent effects, random sources, and so forth.
In general, jitter may be identified on the horizontal axis of an oscilloscope (typically measured in units of time), while noise may be identified on the vertical axis of an oscilloscope (typically measured in units of voltage). In slightly more detail, the term jitter refers to the horizontal displacement of various aspects of the pulses of a signal or waveform from their ideal positions, such as, for example, displacement within the time domain, phase timing, or the width of the pulses themselves. The term noise refers to the vertical displacement of various aspects of the pulses of a signal or waveform, such as, for example, amplitude error in the signal or other vertical noise effects.
Jitter and noise may be “decomposed” (e.g., separated) into various components in order to aid in the analysis of the total impairment of a communications link or an associated system (e.g., transmitter, receiver, transmitter and receiver pair, electronic device or component, etc.), as well as to extrapolate or predict impairments that are typically associated with events of low probability. Conventional approaches for decomposing jitter include separating deterministic jitter (DJ) from random jitter (RJ) and extrapolating lower-probability events by “reassembling” (i.e., convolving) the jitter components to analyze total jitter at a specific bit error rate (BER), sometimes referred to as TJ@BER. Similar methods of decomposition can be applied to noise as well. Complete two-dimensional probability waveforms or eye diagrams may be developed by combining these two orthogonal distributions.
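The “reassembly” step above can be sketched numerically: convolve a DJ distribution with an RJ distribution and read off the jitter width at which each tail probability equals half the target BER. This is a minimal illustration only; the function name, the grid, and the dual-tail (BER/2 per side) convention are assumptions of the sketch, not taken from the text.

```python
import numpy as np

def tj_at_ber(dj_pdf, rj_pdf, t, ber=1e-12):
    """Convolve a deterministic-jitter PDF with a random-jitter PDF
    (both sampled on the symmetric grid t) and report the total-jitter
    width at a target BER, assuming ber/2 in each tail."""
    dt = t[1] - t[0]
    total = np.convolve(dj_pdf, rj_pdf, mode="same") * dt
    total /= total.sum() * dt              # renormalize the combined PDF
    cdf = np.cumsum(total) * dt            # left-tail CDF
    left = t[np.searchsorted(cdf, ber / 2)]
    right = t[np.searchsorted(cdf, 1.0 - ber / 2)]
    return right - left
```

For example, a dual-Dirac DJ (two spikes) convolved with a unit-sigma Gaussian yields a TJ@BER close to the DJ peak separation plus roughly two Q-scaled sigmas.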
FIG. 1 illustrates the decomposition of total jitter. As shown, deterministic jitter (DJ) 1 may comprise: (1) periodic jitter (PJ) 1a, which may include periodic variations of signal edge positions over time; (2) data-dependent jitter (DDJ) 1b, which may be dependent on the bit pattern being transmitted within a given signal, including inter-symbol interference (ISI); and (3) duty cycle distortion (DCD) 1c, which may be dependent on transitions between symbols in a given data pattern. While deterministic jitter 1 may be completely characterized, the remaining component of total jitter 3, referred to as random jitter 2, can only be described by its statistical properties, e.g., a distribution. This is sufficient, however, to perform an accurate analysis of the impairments associated with a given signal.
FIG. 2 is an illustration of the so-called “spectral” approach to the analysis of jitter. A jitter spectrum is displayed, for example, using a logarithmic vertical scale measured in decibels (dB) and a horizontal scale showing jitter modulation frequency in gigahertz (GHz). The spectrum can be seen to contain a number of prominent spikes, some appearing at regular frequency intervals and others at apparently random locations. These spikes correspond to deterministic jitter. In a known spectral approach, the remaining spectral “floor” 200 is assumed to be composed entirely of random jitter with a Gaussian probability distribution.
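The spike-versus-floor separation can be sketched as follows: take the spectrum of a TIE record, flag bins well above a robust floor estimate as deterministic spikes, and attribute the rest to random jitter. The function name, the median-based floor estimate, and the threshold factor are illustrative assumptions, not the patented method.

```python
import numpy as np

def split_jitter_spectrum(tie, spike_factor=4.0):
    """Separate a TIE record's spectrum into 'spikes' (treated as
    deterministic, periodic jitter) and a 'floor' (treated, per the
    conventional assumption, as Gaussian random jitter)."""
    spec = np.abs(np.fft.rfft(tie - tie.mean()))
    floor = np.median(spec)                  # robust floor estimate
    spikes = spec > spike_factor * floor     # boolean mask of PJ bins
    # RJ RMS from the floor bins only (approximate Parseval accounting
    # for the one-sided spectrum)
    n = len(tie)
    rj_rms = np.sqrt(2.0 * np.sum(spec[~spikes] ** 2) / n**2)
    return spikes, rj_rms
```

Note that, exactly as the passage below warns, any bounded uncorrelated jitter left in the floor bins is silently folded into `rj_rms` as if it were Gaussian.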
One limitation of the spectral approach or methodology is that it appears to require a repeating pattern, at least to some degree. Another limitation is that the presumption that the random jitter in the “floor” 200 is best represented by a Gaussian probability distribution is not always valid. For example, jitter associated with crosstalk may be non-periodic and uncorrelated with a given data pattern, while possessing a bounded probability distribution. The consequence of mistaking bounded jitter for unbounded (i.e., random) jitter is particularly severe when jitter measurements are extrapolated and used to measure the performance of a communication link or device at low bit error rates. Said another way, the spectral approach may fail to isolate random jitter from other forms of uncorrelated jitter when, for example, crosstalk is present. As is known in the art, crosstalk occurs between high-speed channel links, and is mostly characterized as bounded noise. In its most general form, it is uncorrelated with the data streams within the links (i.e., the links being analyzed). When at least some of the crosstalk spectral lines broaden and flatten, they may become indistinguishable from the jitter spectral floor 200. This increase in the noise and jitter floors (such as floor 200 in FIG. 2) makes the components of crosstalk indistinguishable from residual, random elements.
In sum, while the spectral approach may identify and remove periodic jitter components, non-periodic, uncorrelated jitter components may remain. One result is that random jitter measurements may be severely inaccurate (i.e., overestimated), which, in turn, results in inaccurate (i.e., overly pessimistic) estimates of TJ@BER.
Similar problems exist in other methods as well (i.e., other than the spectral method). For example, a different approach, referred to as a “correlation method”, is directed at the separation of jitter components of a data stream even when an associated data pattern is unknown or non-repeating. In particular, the correlation approach measures Time Interval Errors (TIE) of the data stream, estimating the ISI and DCD associated with the data pattern and then subtracting the ISI and DCD components from the measured TIE. A spectral approach may then be used to separate the remaining TIE into periodic and random components. In contrast to the “repeating pattern” requirement illustrated by the spectral approach, however, the correlation method may be useful even when a waveform carries a non-repeating data pattern. Further, the correlation method may be combined with the spectral approach such that the correlation method identifies and removes jitter associated with data dependency, after which the spectral method identifies the jitter associated with other deterministic processes. Unfortunately, however, random (but bounded) jitter that is mixed with unbounded Gaussian jitter cannot be separately identified.
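The subtraction step of the correlation idea can be sketched as follows: average the TIE conditioned on the immediately preceding bits (a proxy for ISI/DCD) and remove that conditional mean. This is a simplified illustration under stated assumptions; the function name, the fixed history depth, and the conditional-mean estimator are not taken from the cited method.

```python
import numpy as np

def remove_data_dependent_jitter(tie, bits, depth=4):
    """Estimate the data-dependent (ISI/DCD) part of a TIE record by
    averaging TIE conditioned on the preceding `depth` bits, then
    subtracting that estimate out.  `bits` is a 0/1 integer array
    aligned with `tie`; the first `depth` samples are left untouched."""
    n = len(tie)
    keys = np.zeros(n, dtype=int)
    for d in range(depth):
        # pack bit at offset i-1-d into bit position d of the key
        keys[depth:] |= bits[depth - 1 - d : n - 1 - d] << d
    residual = tie.copy()
    for k in np.unique(keys[depth:]):
        mask = np.zeros(n, dtype=bool)
        mask[depth:] = keys[depth:] == k
        residual[mask] -= tie[mask].mean()   # subtract conditional mean
    return residual
```

The residual can then be handed to a spectral step; as the passage notes, bounded uncorrelated jitter survives both steps.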
U.S. Pat. No. 7,899,638, entitled “Estimating Bit Error Rate Performance of Signals” and issued to M. Miller on Mar. 1, 2011 (referred to as Miller), appears to describe an estimated cumulative distribution function (CDF) method, where an estimate of the TIE's probability density function (PDF) may be obtained from a TIE histogram. Because Gaussian, random jitter is dominant only in the unbounded left and right extremes of Miller's PDF, the standard deviation of this random jitter may be estimated by varying the standard deviation of a Gaussian jitter model and comparing the results to the measured distribution.
In one specific implementation, a histogram (i.e., a sampling of the PDF) may be mathematically integrated to form an estimated CDF, which is then plotted using the so-called Q-scale. As is known, the Q-scale is a mathematical transformation of the CDF's probability axis, such that a Gaussian distribution may be plotted as a straight line with a slope inversely related to its standard deviation. Once the estimated CDF is plotted on the Q-scale, straight lines may be fit to the left and right asymptotic regions (according to a predefined minimization criterion), where the slopes of the lines may reveal the standard deviations of the Gaussian distributions. This process is illustrated by FIGS. 3(A) and 3(B), which show two simulated data sets to which this process may be applied. In each case, the darker line is an estimated CDF derived from a histogram of measured TIE values. In FIG. 3(A), the data set is from a random process with a (solely) Gaussian distribution. Once plotted on the Q-scale, this distribution approximates a straight line having a slope equal to 1/σ, where σ is the standard deviation of the Gaussian distribution. In FIG. 3(B), the data set includes multiple uncorrelated bounded distributions, as well as at least one Gaussian distribution. The two dotted lines 4, 5 indicate that linear fits may be made to the asymptotic extremes of the CDF as a means of estimating the standard deviation, σ, of the Gaussian model parameter for this data set.
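The Q-scale transform and tail fit described above can be sketched as follows. The Q-scale is the inverse normal CDF applied to the probability axis, so a Gaussian plots as a line of slope 1/σ; fitting lines to the two tails recovers σ. The function name and the tail-fraction fitting region are illustrative assumptions, not Miller's stated minimization criterion.

```python
import numpy as np
from statistics import NormalDist

def q_scale_tail_sigmas(samples, tail_frac=0.05):
    """Transform an empirical CDF onto the Q-scale (inverse normal CDF
    of the probability axis) and fit straight lines to both tails; the
    reciprocal slopes estimate the Gaussian standard deviation."""
    x = np.sort(np.asarray(samples, dtype=float))
    n = len(x)
    p = (np.arange(1, n + 1) - 0.5) / n         # CDF kept inside (0, 1)
    q = np.array([NormalDist().inv_cdf(v) for v in p])
    k = max(2, int(n * tail_frac))
    slope_left = np.polyfit(x[:k], q[:k], 1)[0]
    slope_right = np.polyfit(x[-k:], q[-k:], 1)[0]
    return 1.0 / slope_left, 1.0 / slope_right  # sigma estimates
```

For purely Gaussian data both tail fits return approximately the true σ; for mixed data, as FIG. 3(B) suggests, the two tail slopes are what the method reads off as the Gaussian model parameter.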
However, there are several limitations to Miller's “estimated CDF” method. For example, the method appears to provide a way to model only Gaussian and aggregated deterministic components; no modeling parameters are presented to model or estimate individual bounded jitter components that may be present. Further, the presence of multiple bounded components (which typically make up a majority of the jitter being observed, especially when crosstalk is present) may bias attempts to accurately measure the standard deviation, σ, of the relatively small Gaussian components. For example, multiple uncorrelated bounded distributions may combine into a distribution that has extremes resembling a Gaussian distribution (see the well-known “Central Limit Theorem”). The more (uncorrelated, bounded signal impairment) components are present, the closer the resemblance. This makes the separation error-prone.
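The Central Limit Theorem effect noted above is easy to reproduce: summing several bounded (here, uniform) components yields tail quantiles within a few percent of a Gaussian of the same standard deviation, even though the sum is strictly bounded. The component count and sample size below are arbitrary choices for illustration.

```python
import numpy as np

# Sum eight bounded uniform components and compare the resulting
# 99.9th-percentile tail against a Gaussian of matching sigma.
rng = np.random.default_rng(1)
bounded = rng.uniform(-1.0, 1.0, (8, 200_000)).sum(axis=0)  # |x| <= 8
gauss = rng.normal(0.0, bounded.std(), 200_000)             # unbounded

q_bounded = np.quantile(bounded, 0.999)
q_gauss = np.quantile(gauss, 0.999)
# The tails nearly coincide at this probability, yet extrapolating the
# bounded sum as if Gaussian would badly overestimate rare-event jitter.
```

A tail fit made on `bounded` would therefore report a plausible-looking Gaussian σ even though no unbounded component is present, which is precisely the bias described above.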
Heretofore, the limitations discussed above have prevented the total jitter of a given signal or waveform from being measured accurately. Lacking accurate estimates, it is difficult to diagnose the source of jitter, much less design a communications system that minimizes or prevents jitter from interfering with the quality and integrity of signals within such a system.
One approach to addressing these limitations is described in U.S. application Ser. No. 13/081,369 (referred to as the '369 application), mentioned above and assigned to the same assignee as the present application. As described in the '369 application, jitter is decomposed into correlated and uncorrelated components, and the uncorrelated component is further decomposed into bounded, uncorrelated jitter and random (i.e., unbounded) jitter, for example, by integrating a probability density function (PDF) of the residual jitter and analyzing the resulting cumulative distribution function (CDF) curve in Q-space.
While this approach overcomes some of the limitations discussed above, it does not address the circumstance where unbounded (random) components and some bounded, uncorrelated components of signal impairments may co-exist. More particularly, because the unbounded component is very difficult to separate from the bounded component, it cannot be easily replaced by a desired, unbounded component's PDF (e.g., an ideal or near-ideal PDF that includes very low probabilities, far from the mean). Collectively, the combination of the unbounded (random) component and the bounded, non-periodic uncorrelated component of signal impairment(s) may be referred to herein as “residual jitter”.