As in all transmission links, distortion of the transmitted signal is an inevitable consequence of transmission over an imperfect transmission medium and its associated circuitry and other terminating components. In electrical transmission systems, it has long been the case that various forms of compensation, or equalisation, have to be performed on the transmitted signal. In the case of post-transmission equalisation, equalisation is carried out at the receiver or at a number of intermediate repeater stations. Alternatively, in the case of pre-equalisation, the signal is pre-distorted at the transmitter in the opposite sense from the distortion encountered over the transmission link. To a large extent, the same philosophy has been carried over into the field of optical transmission. However, the types of distortion met by optical signals differ from their electrical counterparts because of the nature of the transmitted signal and the medium through which it is carried.
The definitive measure of signal quality in a network is normally the bit error ratio (BER). Where a customer requires a BER in the region of 1×10⁻¹², however, verification of the BER by direct measurement can take an unacceptable time period, even with a high speed network having a data rate of 1×10¹⁰ bits per second (bps). The traditional way of detecting errors is to establish a ones/zeroes decision threshold in the centre of the “eye diagram” (‘eye’) of a transmission link and monitor the data until an error is detected. The period within which an error is detected then depends on the BER and the signal bit rate. In extreme cases, it could take centuries before an error is detected! This technique can be improved by moving the decision threshold up or down within the eye into an area where the bit error ratio is higher and therefore easier to measure within a given time period. By evaluating the measured BER at several different thresholds, an assessment can be made, for the one and zero levels separately, of how quickly the error ratio rolls off as the threshold is moved from the outer edges of the eye towards the centre.
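The scale of the measurement-time problem can be illustrated with a short back-of-the-envelope sketch. The figures used are simply the BER and line rate quoted above; the function name and the choice of 100 errors as a statistically meaningful sample are illustrative, not from the source:

```python
def mean_seconds_between_errors(ber: float, bit_rate: float) -> float:
    """Average interval between bit errors, in seconds.

    Errors arrive at an average rate of BER * bit_rate per second,
    so the mean spacing between them is the reciprocal of that rate.
    """
    return 1.0 / (ber * bit_rate)

# At BER = 1e-12 on a 10 Gbit/s link, an error occurs on average
# only once every 100 seconds...
t = mean_seconds_between_errors(1e-12, 1e10)
print(t)  # 100 s per error on average

# ...so even a modest sample of 100 errors takes hours to gather,
# and at lower target BERs the wait stretches to years.
print(100 * t / 3600)  # hours to accumulate 100 errors
```

This is why moving the threshold into a higher-BER region of the eye, where errors arrive quickly, makes the measurement tractable.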
FIG. 1 represents a plot of the BER against position of the decision threshold between data zero and data one. The figure shows that the roll-off for the ones and the zeroes is not necessarily the same. At the extremities of the plot, errors can be detected, but as the threshold is moved towards the centre of the eye, a position is reached where errors cannot be detected in an acceptable time. At this point, measurements are stopped, marked by the abrupt ends of the solid lines. The absolute minimum BER will occur at a decision threshold where the trends in error ratio on the one and zero levels produce the same BER value. So, by extrapolating the trend lines from the measured data to the point where they cross, as indicated by the dotted lines in FIG. 1, it is possible to estimate the actual minimum BER (min BER) without measuring it directly.
An alternative way of presenting the same information is shown in FIG. 2, where the Q value is plotted against the threshold, using the standard conversion equation known in the art:

Q = √2 · erfc⁻¹(2×BER)

where erfc is the complementary error function.
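This conversion can be sketched with the Python standard library alone, since ½·erfc(Q/√2) is exactly the lower-tail probability Φ(−Q) of a standard normal distribution, so Q = −Φ⁻¹(BER). The function names here are illustrative, not from the source:

```python
from statistics import NormalDist

_N = NormalDist()  # standard normal distribution, mean 0, sigma 1

def ber_to_q(ber: float) -> float:
    """Q value for a given BER: Q = sqrt(2)*erfc^-1(2*BER) = -Phi^-1(BER)."""
    return -_N.inv_cdf(ber)

def q_to_ber(q: float) -> float:
    """Inverse conversion: BER = 0.5*erfc(Q/sqrt(2)) = Phi(-Q)."""
    return _N.cdf(-q)

print(ber_to_q(1e-12))  # Q for a 1e-12 error ratio, about 7.03
```

Working in the Q domain in this way is what straightens the trend lines, as FIG. 2 illustrates.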
As a result of this conversion, the curved lines of FIG. 1 become straight in FIG. 2, so the extrapolation reduces to fitting and extending straight lines rather than fitting curves. The use of Q values, in combination with the straight lines, therefore also makes the plot more intuitive to read than the curve-fitting exercise required for FIG. 1.
One of the key requirements of the network is to assess what is happening to the signal, for example, whether factors such as the levels of noise, distortion, power etc. are within expected limits. The above technique is powerful since it enables the network operator to predict error ratios in a context where actual measurement periods would be completely unacceptable.