1. Field of the Invention
This invention relates to techniques and processes for the detection and correction of errors in digital information, and more particularly relates to techniques and processes for correcting errors in signals which employ the modified duobinary code.
2. Description of the Prior Art
Error detection techniques for binary and modified duobinary signals are well known. One technique for error detection in modified duobinary systems is disclosed in U.S. Pat. No. 3,461,426. More recently, the present inventor filed an application entitled "Error Detection For Modified Duobinary Signals," Ser. No. 742,168, dated Nov. 15, 1976. It is important to note that the error detection process provides an indication of an error only after the error has occurred. Unfortunately, the time location of the error is unknown and, therefore, correction of such errors cannot be accomplished by simple error detectors. Rather than attempting to determine the time location of the error per se, prior-art techniques for error correction must often rely on determining the bit in a given sequence which is most likely to be in error. If an error is detected for that sequence, the bit most likely to be in error is altered.
Techniques for improving the integrity of digital information have employed parity check digits. One such technique is described in an article entitled, "Error Detecting and Error Correcting Codes," R. W. Hamming, Bell System Technical Journal, Vol. 29, pp. 147-160, April 1950. In this technique, Hamming devised a code that corrects all single errors. The code consists of adding k suitably chosen check digits to the m message digits. If another digit is added, double errors can be detected as well as single errors corrected. A different code is described in an article entitled, "Coding For Constant-Data-Rate Systems -- Part I, A New Error-Correcting Code," by R. A. Silverman and M. Balser, IRE Proceedings, pp. 1428-1435, September 1954. This latter article describes the Wagner code, in which a transmitted word consists of a sequence of m message digits and an additional digit used as a parity check. As each perturbed digit y arrives at the receiver, the a posteriori probabilities p(x₁|y) and p(x₂|y) are calculated. Each digit of the received sequence is tentatively identified as x₁ or x₂, depending on whether p(x₁|y) or p(x₂|y) is larger, and the values of the a posteriori probabilities are stored in a memory for the duration of a word. The sequence thus obtained is checked for parity. If the parity is correct, the word is printed as received. If the parity check fails, the digit for which the difference Δp between the a posteriori probabilities is smallest is considered the digit most in doubt, and the word is printed with this digit altered. The receiver then clears the stored probability differences from the memory and proceeds to the next word. Thus, the Wagner code may be characterized as one which has a high probability of correcting single errors. Words containing multiple errors are normally printed incorrectly.
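The Wagner decision rule described above can be summarized in a minimal sketch. The function name, the use of a soft sample's magnitude as the reliability measure (which stands in for the stored difference Δp of a posteriori probabilities under a symmetric Gaussian channel), and the mapping of positive samples to 1 are all illustrative assumptions, not part of the cited article:

```python
def wagner_decode(received, parity="even"):
    """Decode one Wagner-coded word of soft received samples.

    `received` holds m message samples plus one parity sample.
    Tentative decisions are taken by sign (1 if positive, else 0).
    If the parity check fails, the least reliable digit -- the one
    whose sample lies closest to the decision threshold, i.e., the
    digit most in doubt -- is altered.
    """
    bits = [1 if y > 0 else 0 for y in received]
    if sum(bits) % 2 != (0 if parity == "even" else 1):
        weakest = min(range(len(received)), key=lambda i: abs(received[i]))
        bits[weakest] ^= 1  # flip the digit most likely in error
    return bits
```

Note that a word containing two errors would pass the parity check and be printed incorrectly, which is why the Wagner code is characterized as a single-error corrector.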
With respect to the Wagner method of error correction as described in the above-noted September 1954 article, certain disadvantages result from its use. First, the binary data bit stream is divided into words, each consisting of a sequence of m message digits and a single redundant digit. Thus, (m + 1) digits are sent and the redundancy is 1/(m + 1), or about 17% for m = 5. Further, for the Wagner method, and for any method which requires the use of redundant digits, each bit period, i.e., time slot, must initially be reduced if the same bit rate is to be retained following correction. Once the redundant bit has served its function, it is removed. Despite these disadvantages, use of the technique results in a dramatic improvement in error rate, as is shown in the above-noted September 1954 paper.
More recently, the theory of error correction was further developed in a paper entitled, "Maximum-Likelihood Sequence Estimation of Digital Sequences In The Presence Of Intersymbol Interference," by G. David Forney, Jr., IEEE Transactions On Information Theory, pp. 363-378, May 1972. The sequence estimator is for use with a digital pulse-amplitude-modulated sequence in the presence of finite intersymbol interference and white Gaussian noise. The structure comprises a linear filter, called a whitened matched filter, whose output is sampled, and a recursive nonlinear processor, called the Viterbi algorithm. This structure is a maximum-likelihood estimator of the entire transmitted sequence.
Intersymbol interference is normally considered to be a primary impediment to reliable high-rate digital transmission over high signal-to-noise ratio narrow-band channels such as voice-grade telephone circuits. Intersymbol interference is also introduced deliberately for the purpose of spectral shaping in certain modulation schemes for narrow-band channels called duobinary, partial-response, and the like. In his paper, Forney presents a simplified, but effective, optimum algorithm suitable for some partial-response schemes. In particular, beginning at page 373, under the heading "A Practical Algorithm", the discussion is directed to an algorithm suitable for use with the class of partial-response schemes defined by f(D) = 1 ± Dⁿ, illustrated by f(D) = 1 - D. The error-correction algorithm block diagram is shown in his FIG. 9, and the flow chart for error correction with partial response is shown in his FIG. 10. In the technique illustrated, whether a tentative decision sequence is allowable is determined by passing the sequence through an inverse linear filter (shown in FIG. 8 of the article) with an impulse response 1/f(D) to see whether an allowable input sequence comes out. The filter includes a feedback network, as illustrated in that FIG. 8.
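For f(D) = 1 - D, the inverse filter 1/f(D) = 1/(1 - D) is simply a running sum with feedback. The following sketch of the allowability test assumes binary inputs in {0, 1} and a zero initial state; the function name is illustrative:

```python
def is_allowable(z, x0=0):
    """Check a tentative decision sequence for the f(D) = 1 - D scheme.

    The inverse filter 1/f(D) = 1/(1 - D) recovers the input by the
    recursion x[k] = z[k] + x[k-1] (the feedback network).  The
    tentative sequence z is allowable only if every recovered input
    x[k] lies in the binary alphabet {0, 1}.
    """
    x = x0
    for zk in z:
        x = zk + x  # feedback: add the previous recovered input
        if x not in (0, 1):
            return False
    return True
```

The feedback is what makes a single decision error propagate: once one recovered input is a unit too high, every subsequent recovered input is also a unit too high until the sequence leaves the alphabet and the error is exposed.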
Whenever an error is made, the feedback network causes the error to propagate in the circuit, affecting all subsequent outputs. In each case, the output will be one unit higher than the corresponding input. Localization of the error within a finite time span requires information about the reliability of each of the tentative decisions previously made. For any reasonable noise distribution, the tentative decision most likely to be in error is the one for which the error differential, i.e., the difference between the received amplitude and the standard amplitude for that level, has the largest magnitude with the appropriate polarity. The tentative decision at that location is considered to be in error and the bit is altered.
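The correction step can be sketched as follows for the f(D) = 1 - D case with output levels {-1, 0, 1}. This is a simplified illustration of the largest-error-differential rule, not Forney's FIG. 10 flow chart itself: the function name, the nearest-level slicer, and the zero initial state are assumptions, and only a single adjacent-level error per correction interval is handled:

```python
def correct_largest_differential(received, levels=(-1, 0, 1)):
    """Illustrative single-error correction for f(D) = 1 - D outputs.

    Each received sample is sliced to the nearest standard level.  The
    running-sum inverse filter 1/(1 - D) then recovers the inputs; when
    a recovered input leaves the alphabet {0, 1}, the tentative
    decision whose error differential (received amplitude minus
    standard amplitude) has the largest magnitude with the appropriate
    polarity is moved one level in the correcting direction.
    """
    decisions = [min(levels, key=lambda s: abs(r - s)) for r in received]
    x = 0
    for k, zk in enumerate(decisions):
        x += zk  # inverse filter 1/(1 - D)
        if x in (0, 1):
            continue
        # x = 2: some earlier decision is one unit too high (polarity -1);
        # x = -1: one unit too low (polarity +1).
        polarity = -1 if x > 1 else 1
        scores = [(received[i] - decisions[i]) * polarity for i in range(k + 1)]
        j = max(range(k + 1), key=lambda i: scores[i])
        decisions[j] += polarity  # alter the most doubtful decision
        x += polarity
    return decisions
```

In the failing example below, the second sample (0.6) slices to +1 although 0 was sent; the inverse filter exposes the violation two positions later, and that sample, having the largest error differential of the required polarity, is the one corrected.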
For a 3-level modified duobinary system, it is neither necessary nor desirable to use code words or blocks such as are used in the Wagner or Forney systems. In the instant invention, the block length is, in effect, variable: it is essentially the interval established by two successive extreme-level bits. Also of importance is the fact that no redundant parity digit is necessary for detecting single errors. Thus, no redundancy is introduced at the transmitting end. In the instant invention, because of specific correlative patterns in the code, the polarity of an error can also be determined, i.e., whether it is positive or negative. Thus, the process of identifying the error location may be substantially reduced, on the average by a factor of two, as compared to Wagner's method.
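The correlative pattern underlying such detection is the well-known alternation property of modified duobinary, f(D) = 1 - D²: in each of the two interleaved substreams, successive nonzero levels must alternate in polarity. The sketch below illustrates only this detection property, not the invention's correction procedure; the function name, the level alphabet {-1, 0, 1}, and the reporting convention are illustrative assumptions:

```python
def find_violation(samples):
    """Scan a modified-duobinary (f(D) = 1 - D**2) sequence for an error.

    Within each interleaved substream (even-indexed and odd-indexed
    samples), nonzero levels must alternate in polarity.  Returns
    (index, polarity) for the first sample that repeats the preceding
    nonzero polarity in its substream -- the repeated polarity gives
    the direction of the error -- or None if the correlative pattern
    holds throughout.
    """
    last_nonzero = {0: 0, 1: 0}  # last nonzero level seen per substream
    for k, level in enumerate(samples):
        if level == 0:
            continue
        stream = k % 2
        if level == last_nonzero[stream]:
            return (k, level)  # same polarity twice: error detected
        last_nonzero[stream] = level
    return None
```

As the specification notes, the violation only brackets the error within the interval since the last extreme-level bit of that substream; no redundant digit is consumed in obtaining it.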
As with the prior-art techniques discussed hereinabove, the noise impairment in the transmission medium should be such that a reasonable signal-to-noise ratio is obtained for most effective performance of the error corrector. By reasonable is meant a signal-to-noise ratio such that the line error rate is no worse than 10⁻³. For such a line error rate in the presence of, for example, Gaussian noise, most errors occur only between adjacent signal levels. The probability of an error occurring between nonadjacent levels, such as between the top and bottom levels, is negligible, being on the order of 10⁻²⁵. Thus, error occurrences not between adjacent levels can be disregarded for all practical purposes, and the assumption that errors occur between adjacent levels is valid for the overwhelming majority of existing transmission systems. It is also important that the majority of errors be single errors, since the technique, being based on correction of the bit having the greatest error differential and, thus, the greatest likelihood of error, can be directed to only a single bit within an error correction interval. The above conditions are desirable for optimum performance of the error corrector but are not an absolute requirement; they generally obtain in present-day telecommunication transmission facilities.
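The adjacent-level assumption can be checked numerically. Assuming Gaussian noise, level spacing d, and decision thresholds midway between levels, the probability of crossing into an adjacent level is roughly Q(d/2σ) and into the level beyond roughly Q(3d/2σ). The sketch below uses an illustrative spacing chosen so the adjacent-level rate is near 10⁻³; the exact nonadjacent figure depends on the system parameters, so this merely shows the disparity of many orders of magnitude:

```python
import math

def q(x):
    """Gaussian tail probability Q(x) via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# Illustrative spacing-to-noise ratio: Q(3.09) is approximately 1e-3,
# matching a line error rate of about 10**-3 between adjacent levels.
d_over_sigma = 2 * 3.09
p_adjacent = q(d_over_sigma / 2)        # error to an adjacent level
p_nonadjacent = q(3 * d_over_sigma / 2)  # error skipping a level
```

Under these assumptions the nonadjacent-level probability falls below 10⁻¹⁵, negligible against the adjacent-level rate, which is the basis for disregarding nonadjacent errors in practice.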