In modern communications, a great deal of data is transmitted in digital form but, due to signal degradation over the transmission path, there are occasional errors in the signal regenerated at the receiver. A number of techniques have been devised for detecting and correcting these errors, the most obvious being the detection of errors at the receiver and the request of a retransmission. In many applications, however, retransmission is impractical and, therefore, error correcting codes have been devised in which the errors can be detected and corrected by decoding techniques at the receiver without the benefit of retransmission.
Maximum-likelihood decoding is typically used for error correction; that is, the decoder selects the information sequence whose corresponding code sequence best agrees with the detected sequence of received code symbols.
One such maximum-likelihood decoding technique is known as majority logic decoding, in which modulo-2 sums of hard-detected code symbols are used for parity checks on each decoded information bit. An encoder at the transmitter generates a plurality of parity symbols and transmits these together with the information symbols. At the receiver, a decoder receives the transmitted information symbols as well as the transmitted parity symbols and regenerates a set of parity symbols from the received information bits. The received and regenerated parity symbols are then compared and, if enough of the received and regenerated parity symbols match, the information bit is considered correct. If too many of the received and regenerated parity symbols disagree, the bit will be considered an error and will be "corrected". In the case of a binary digital system in which each bit is either a "1" or a "0", the correction is accomplished by merely adding a "1" to the detected information bit in a modulo-2 adder. Such a technique is well known and is described in Threshold Decoding, by James L. Massey, M.I.T. Press, Cambridge, Massachusetts, 1963.
FIG. 1 depicts a Tapped Delay Line (TDL) encoder of constraint length K=7 that produces a self-orthogonal systematic convolutional code of rate 0.5 with J=4 parity checks on each information symbol. The encoder includes a seven-stage shift register 10 having tap connections 12 to four of the stages. The tap connections are combined in an adder 14, the output of which is a parity symbol U.sub.n. For each new symbol X.sub.n received in the first stage of shift register 10, the information symbol X.sub.n and parity symbol U.sub.n are transmitted, thus yielding a code rate of 1/2. The tap connections are defined by .alpha..sub.o =1, .alpha..sub.1 =1, .alpha..sub.2 =0, .alpha..sub.3 =0, .alpha..sub.4 =1, .alpha..sub.5 =0, .alpha..sub.6 =1. Note that any information bit will remain in the TDL for only K bit periods. Therefore, J parity equations on each information bit will be produced within K successive parity symbols. For the nth information bit, the appropriate parity equations are: EQU U.sub.n =X.sub.n +X.sub.n-1 +X.sub.n-4 +X.sub.n-6 EQU U.sub.n+1 =X.sub.n+1 +X.sub.n +X.sub.n-3 +X.sub.n-5 EQU U.sub.n+2 =X.sub.n+2 +X.sub.n+1 +X.sub.n-2 +X.sub.n-4 EQU U.sub.n+3 =X.sub.n+3 +X.sub.n+2 +X.sub.n-1 +X.sub.n-3 EQU U.sub.n+4 =X.sub.n+4 +X.sub.n+3 +X.sub.n +X.sub.n-2 EQU U.sub.n+5 =X.sub.n+5 +X.sub.n+4 +X.sub.n+1 +X.sub.n-1 EQU U.sub.n+6 =X.sub.n+6 +X.sub.n+5 +X.sub.n+2 +X.sub.n
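The encoder of FIG. 1 can be sketched in software as follows. This is a minimal illustration, not the patent's circuit; the names `TAPS` and `encode` are hypothetical, and the taps are the .alpha. values given above.

```python
# Sketch of the FIG. 1 encoder: rate-1/2 systematic self-orthogonal
# convolutional code, constraint length K=7, taps alpha = (1,1,0,0,1,0,1).
TAPS = [0, 1, 4, 6]  # stages i with alpha_i = 1

def encode(info_bits):
    """For each information bit X_n, emit the pair (X_n, U_n) where
    U_n = X_n + X_{n-1} + X_{n-4} + X_{n-6} (mod 2)."""
    reg = [0] * 7  # seven-stage shift register, reg[0] holds the newest bit
    out = []
    for x in info_bits:
        reg = [x] + reg[:-1]               # shift the new bit in
        u = sum(reg[i] for i in TAPS) % 2  # parity symbol from the taps
        out.append((x, u))                 # systematic: send X_n and U_n
    return out
```

Feeding a single "1" followed by zeros reproduces the tap pattern (1,1,0,0,1,0,1) in the parity stream, since that impulse passes each tap in turn.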
It is seen that X.sub.n is contained in J=4 parity equations; U.sub.n, U.sub.n+1, U.sub.n+4, and U.sub.n+6. No other term is included in more than one of these four equations. Thus, the J=4 equations are orthogonal on X.sub.n. Because of the generality of the equations (X.sub.n is the nth term), there are J=4 orthogonal parity equations on each information bit. These are the equations generated when the information bit passes the tap connections in the encoder. Note that all K-1=6 terms that immediately precede and follow X.sub.n are contained in the J=4 parity equations on X.sub.n. Thus, the code is "perfect" in the sense that all possible combinations of information symbols are utilized in the parity checks. Consequently, no other self-orthogonal code with J=4 parity checks can be constructed more efficiently so as to require fewer encoder stages. Note that 4 equations with 4 terms each comprise 16 total terms: X.sub.n four times and 12 other terms once each. If these 12 terms are not the six adjacent to X.sub.n on each side, a constraint length longer than K=7 will be required. It may be possible to obtain J=4 orthogonal equations from a shorter constraint length if the code is not self-orthogonal. In such a case, orthogonal equations may be obtained from linear combinations of the parity symbols, whereas self-orthogonality requires that the orthogonal equations be obtained directly without any manipulation or combining of parity symbols.
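The orthogonality claims above can be checked mechanically. The sketch below (hypothetical helper names) uses the fact that parity equation U.sub.n+k contains X.sub.n+k-i for each tap i, so the equations that check X.sub.n are those with k equal to a tap position.

```python
# Sketch: verify the self-orthogonality of the J=4 parity checks on X_n
# for the tap set of FIG. 1.
from collections import Counter

TAPS = [0, 1, 4, 6]

def check_offsets():
    """For each of the J=4 checks on X_n, return the set of information-bit
    offsets (relative to n) involved in that check."""
    return [{k - i for i in TAPS} for k in TAPS]

def is_self_orthogonal():
    checks = check_offsets()
    others = Counter(off for c in checks for off in c if off != 0)
    return (all(0 in c for c in checks)      # X_n appears in every check
            and max(others.values()) == 1)   # no other term appears twice
```

The 12 non-X.sub.n offsets turn out to be exactly the six neighbors on each side of X.sub.n, confirming that the code is "perfect" in the sense described.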
An additive white Gaussian channel is assumed in which the received signal is the code symbol corrupted by noise. Let .delta..sub.n and .epsilon..sub.n denote independent Gaussian samples with zero means and equal variances. Then, the received code symbols plus noise are given by: EQU x.sub.n =X.sub.n +.delta..sub.n EQU u.sub.n =U.sub.n +.epsilon..sub.n
At the detector, x.sub.n and u.sub.n are quantized into digital estimates X'.sub.n and U'.sub.n of the code symbols X.sub.n and U.sub.n, respectively. Then, the detected information symbols X'.sub.n are used as inputs to a replica of the encoder. The output of this device is a regenerated estimate U".sub.n of the parity symbol U.sub.n. Also, the detected parity symbol U'.sub.n is an estimate of U.sub.n. Thus, the modulo-2 sum of U".sub.n and U'.sub.n is taken to determine whether these two estimates agree. The output of this mod-2 addition is the parity check symbol A.sub.n, and the sequence of parity checks is termed the syndrome. The syndrome is used in algebraic decoding techniques to estimate the error sequence {e.sub.n }. If the error sequence is within the error-correcting capability of the code, the error estimates will be correct. Each error estimate e.sub.n is then added modulo-2 to the detection estimate X'.sub.n to obtain the decoded estimate of X.sub.n. Thus, with modulo-2 addition,
EQU X'.sub.n =X.sub.n +e.sub.n
EQU X.sub.n =X'.sub.n +e.sub.n
Note that there are J=4 parity checks on X.sub.n. These are given by:
EQU A.sub.n =U'.sub.n +X'.sub.n +X'.sub.n-1 +X'.sub.n-4 +X'.sub.n-6
EQU A.sub.n+1 =U'.sub.n+1 +X'.sub.n+1 +X'.sub.n +X'.sub.n-3 +X'.sub.n-5
EQU A.sub.n+4 =U'.sub.n+4 +X'.sub.n+4 +X'.sub.n+3 +X'.sub.n +X'.sub.n-2
EQU A.sub.n+6 =U'.sub.n+6 +X'.sub.n+6 +X'.sub.n+5 +X'.sub.n+2 +X'.sub.n
Note that X.sub.n cannot be decoded until U'.sub.n+6 and X'.sub.n+6 are obtained. Thus, there is a decoding delay of K-1=6 information bit intervals.
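The syndrome formation described above can be sketched as follows, assuming the encoder of FIG. 1; the names `TAPS` and `syndrome` are hypothetical.

```python
# Sketch: form the syndrome {A_n} from hard-detected symbols.
# A_n = U'_n + U''_n (mod 2), where U''_n is the parity regenerated from
# the detected information bits X'_n by a replica of the FIG. 1 encoder.
TAPS = [0, 1, 4, 6]

def syndrome(xs, us):
    """xs: detected information bits X'_n; us: detected parity bits U'_n.
    The 'if n - i >= 0' guard mirrors the encoder's zero-initialized register."""
    A = []
    for n in range(len(xs)):
        regen = sum(xs[n - i] for i in TAPS if n - i >= 0) % 2  # U''_n
        A.append((us[n] + regen) % 2)                           # A_n
    return A
```

With error-free detection the syndrome is all zeros, and a single parity-symbol error flips exactly one syndrome position.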
Threshold decoding is a generic term for a method of performing weighted majority decisions, in which the J+1 estimates on X.sub.n are assigned weighting coefficients w.sub.j, j=0 to J. Thus, w.sub.o is the weight given to the detection estimate E.sub.no. Define by .lambda..sub.1 the sum of the weights for the other estimates E.sub.nj that disagree with the detection estimate. Also define as .lambda..sub.o the sum of the weighting factors for the other E.sub.nj estimates that are in agreement with E.sub.no. Thus,
EQU .lambda..sub.o =.SIGMA.w.sub.j, summed over j.gtoreq.1 for which E.sub.nj =E.sub.no
EQU .lambda..sub.1 =.SIGMA.w.sub.j, summed over j.gtoreq.1 for which E.sub.nj .noteq.E.sub.no
Weighted majority decoding is accomplished by selecting the value of "1" or "0" for X.sub.n in accordance with the estimate that has the greatest total weight. The total weighting in favor of the detection estimate E.sub.no is w.sub.o +.lambda..sub.o. Also, the total weighting for disagreements with the detection estimate is .lambda..sub.1. Then, the decoding estimate is given by:
EQU X.sub.n =X'.sub.n if w.sub.o +.lambda..sub.o .gtoreq..lambda..sub.1
EQU X.sub.n =X'.sub.n +1 (mod 2) if .lambda..sub.1 >w.sub.o +.lambda..sub.o
An agreement of E.sub.nj with E.sub.no means that the jth parity check is good, or C.sub.nj =0. Similarly, disagreement of E.sub.nj with the detection estimate implies that the parity check fails, or C.sub.nj =1. Thus, threshold decoding is usually based on the parity checks, which are defined by: EQU C.sub.nj =E.sub.no +E.sub.nj, j=1 to J
The value of the detection estimate is changed only when the sum .lambda..sub.1 of the weights w.sub.j for which C.sub.nj =1 exceeds the weight w.sub.o of the detection estimate plus the sum .lambda..sub.o of the other w.sub.j weights for which C.sub.nj =0. Note that:
EQU .lambda..sub.o +.lambda..sub.1 =.SIGMA..sub.j=1.sup.J w.sub.j
Now define the decoding threshold by:
EQU .GAMMA.=(1/2).SIGMA..sub.j=0.sup.J w.sub.j
Then, for threshold decoding, the detection estimate E.sub.no =X'.sub.n is corrected only if .lambda..sub.1 >.GAMMA., that is, only when
EQU .SIGMA.w.sub.j (summed over j for which C.sub.nj =1)>(1/2).SIGMA..sub.j=0.sup.J w.sub.j
Majority-logic decoding usually refers to the special case just described, in which the weighting coefficients w.sub.j are all assigned unity values. Such a decoding technique is disclosed in U.S. Pat. No. 3,622,338 to Cain. Note that in this special case the threshold for equal weighting is one-half of the total number of independent estimates on each information symbol:
EQU .GAMMA.=(J+1)/2
Also, the total weight .lambda..sub.1 for the disagreeing estimates is equal to the number N.sub.D of disagreements of the E.sub.nj estimates with the detection estimate X'.sub.n :
EQU .lambda..sub.1 =N.sub.D
Also, let N.sub.A represent the number of E.sub.nj estimates on X.sub.n that agree with the detection estimate. Then the total of all the weights is J+1:
EQU w.sub.o +.lambda..sub.o +.lambda..sub.1 =1+N.sub.A +N.sub.D =J+1
Thus, in majority logic decoding, the detection estimate is changed or corrected only if the number N.sub.D of disagreements exceeds one-half of the total number of estimates. Equivalently, over one-half of the parity checks on X.sub.n must fail (C.sub.nj =1) if the detection estimate is to be changed. Hence, threshold decoding can be used as the generic term for majority decoding that may or may not include unequal weighting of the J+1 estimates of each received information bit.
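The generic threshold decision can be sketched as below; `threshold_decide` is a hypothetical name, and with unity weights the rule reduces to the majority rule just described.

```python
# Sketch of the threshold decision: correct the detection estimate X'_n
# only when the total weight of failing parity checks exceeds half the
# total weight of all J+1 estimates.
def threshold_decide(x_det, checks, weights):
    """x_det: detected bit X'_n; checks: parity checks C_nj (1 = fail),
    j = 1..J; weights: w_0..w_J, with w_0 weighting the detection estimate."""
    lam1 = sum(w for c, w in zip(checks, weights[1:]) if c == 1)
    gamma = sum(weights) / 2        # decoding threshold GAMMA
    e = 1 if lam1 > gamma else 0    # error estimate e_n
    return x_det ^ e                # corrected bit (mod-2 addition)
```

For J=4 with unity weights, GAMMA=2.5, so three or more failing checks flip the detected bit while two or fewer leave it unchanged.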
FIG. 2 illustrates a typical majority-logic decoder for detected code symbols. The information symbols X.sub.n and parity symbols U.sub.n, having been transmitted over an additive white Gaussian channel, are corrupted by noise and are received at a detector 20 as received code symbols x.sub.n and u.sub.n, respectively. The detected information symbols X'.sub.n are provided to an encoder replica consisting of a shift register 22, tap connections 24 and adder 26 similar in design and operation to their counterparts in the encoder. As each detected information symbol X'.sub.n advances through the encoder replica, a recreated parity symbol U".sub.n is generated at the output of the adder 26. These recreated parity symbols are combined with the detected parity symbols U'.sub.n in a modulo-2 adder 28, the output of which is provided to a syndrome register 30. If all of the recreated parity symbols agree with the detected parity symbols, their modulo-2 sums will be 0, and the contents of the syndrome register 30 will be all zeros. The majority decision circuitry 32 will examine the four orthogonal parity checks on X.sub.n, determine that all parity symbols are in agreement, and provide a "0" output signal e.sub.n to the correction modulo-2 adder 34. Under this condition, the detection estimate X'.sub.n will be considered correct and will not be changed by the adder 34.
On the other hand, if an excessive number of the recreated and detected parity symbols are in disagreement, the majority decision circuitry 32 will provide a "one" output signal to the adder 34, thus resulting in a change in the detection estimate X'.sub.n.
Because of the orthogonality condition, any symbol other than X'.sub.n can affect only one of the parity checks on X.sub.n. If X'.sub.n is correct, up to J/2 errors in these other symbols will cause no more than J/2 parity checks to fail (have logical "1" value). If X'.sub.n is in error and at most (J/2)-1 other symbols are in error in the J parity checks on X.sub.n, then at least (J/2)+1 of these parity checks will fail. Therefore, up to J/2 errors can be corrected on a majority basis. If J/2 or fewer parity checks fail (C.sub.nj =1 for J/2 or fewer checks), the error estimate is e.sub.n =0. Thus, the detection estimate is not altered by decoding. If C.sub.nj =1 for more than J/2 parity checks, then e.sub.n =1, and the decoder changes the detection estimate to its complement.
Of the J parity checks that are used to decode a given information symbol, J-1 will be used later for decoding other information symbols. When the information bit is determined to be in error it is, of course, corrected. However, such a symbol error renders the J parity checks questionable, so the J-1 parity checks that are to be used later would be unreliable. Decoding performance can therefore be improved if these J-1 parities are complemented before their future use. Feedback of the error estimate e.sub.n can thus be used to correct the parity checks on X.sub.n that are to be utilized for majority decisions on other information symbols.
Such a feedback scheme is depicted in FIG. 3 for "syndrome resetting". The error correction signal is fed back to modulo-2 adders 36, 38 and 40 to complement the contents of those stages of the syndrome register which were used in the majority decision, thus rendering these parity checks somewhat more reliable.
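A software rendering of the decoder of FIGS. 2 and 3 might look like the sketch below. This is an illustrative model under the assumptions of FIG. 1 (taps 0, 1, 4, 6; J=4; K=7), not the patent's circuit, and the names `decode`, `A`, and `checks` are hypothetical.

```python
# Sketch: majority-logic decoding with decision feedback ("syndrome
# resetting"). After each decision, the error estimate e_n complements
# the syndrome stages that were used, as in FIG. 3.
TAPS = [0, 1, 4, 6]
J, K = 4, 7

def decode(xs, us):
    """xs: detected information bits X'_n; us: detected parity bits U'_n."""
    n_bits = len(xs)
    # syndrome A_n = U'_n + regenerated parity U''_n (mod 2)
    A = [(us[n] + sum(xs[n - i] for i in TAPS if n - i >= 0)) % 2
         for n in range(n_bits)]
    decoded = []
    # decoding delay of K-1 bits: X_n needs A_{n+6}
    for n in range(n_bits - (K - 1)):
        checks = [A[n + k] for k in TAPS]   # the J orthogonal checks on X_n
        e = 1 if sum(checks) > J // 2 else 0  # majority decision
        decoded.append(xs[n] ^ e)           # correction modulo-2 adder
        if e:                               # decision feedback: reset the
            for k in TAPS:                  # syndrome stages just used
                A[n + k] ^= 1
    return decoded
```

A single information-bit error causes all four of its parity checks to fail, so it is corrected, and the feedback clears those four syndrome stages so that later decisions are not disturbed.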
Note that each of the J=4 parity equations on X.sub.n may be written as the modulo-2 sum of X.sub.n plus other terms. With E.sub.nj used to denote the terms other than X'.sub.n in the jth parity check on X.sub.n, the J=4 parity checks on X.sub.n may be written as:
EQU C.sub.n1 =X'.sub.n +E.sub.n1
EQU C.sub.n2 =X'.sub.n +E.sub.n2
EQU C.sub.n3 =X'.sub.n +E.sub.n3
EQU C.sub.n4 =X'.sub.n +E.sub.n4
Thus, in terms of mod-2 addition,
EQU E.sub.nj =X'.sub.n +C.sub.nj, j=1 to 4
Such a technique of using decoder decisions for syndrome resetting is known as Decision FeedBack (DFB). An alternative implementation is Partial FeedBack (PFB), in which a parity term is updated by feedback only if it originally had a "1" value.
In the absence of errors in detection, the syndrome sequence of parity checks {A.sub.n } will contain all zeros. Therefore, any E.sub.nj term, which includes all terms of the corresponding parity check C.sub.nj except X'.sub.n, will be equal to X.sub.n in the absence of detection errors. Because of the orthogonality of the parity equations, each E.sub.nj can be considered as an independent estimate of X.sub.n. Also, the detected value of X.sub.n can be used as another independent estimate, termed E.sub.no. Hence, there will be J+1=5 independent estimates of each information symbol, as given by:
EQU E.sub.no =X'.sub.n
EQU E.sub.n1 =U'.sub.n +X'.sub.n-1 +X'.sub.n-4 +X'.sub.n-6
EQU E.sub.n2 =U'.sub.n+1 +X'.sub.n+1 +X'.sub.n-3 +X'.sub.n-5
EQU E.sub.n3 =U'.sub.n+4 +X'.sub.n+4 +X'.sub.n+3 +X'.sub.n-2
EQU E.sub.n4 =U'.sub.n+6 +X'.sub.n+6 +X'.sub.n+5 +X'.sub.n+2
In general, the decoded or corrected value X.sub.n of X.sub.n will be some function of the J+1=5 independent estimates of its value. EQU X.sub.n =G[E.sub.n0, E.sub.n1, E.sub.n2, E.sub.n3, E.sub.n4 ]
In the usual implementation of majority decoding, the decoder accepts only hard-detected code symbols and the J+1 estimates are given equal weighting. Although this may be the simplest method of majority decoding, it is also the least accurate. Decoding performance can be substantially improved if each estimate is weighted in accordance with its reliability, as determined from the soft-detection levels of the code symbols involved in the estimate. In "hard" detection, only the presence or absence of a "1" is determined; in "soft" detection, the level of the received voltage is monitored, with the reliability of the signal determined by how close the detected voltage level is to a nominal "1" or "0" level.
"A posteriori probabilistic" (APP) decoding is threshold decoding with the weighting coefficients chosen to optimize the decoding performance on a probabilistic basis. Let P.sub.j denote the probability that the jth estimate of X.sub.n is in error. Also, define Q.sub.j =1-P.sub.j as the probability of correct estimation. As shown by Massey, the weighting coefficients for APP decoding are given by: EQU w.sub.j =H log (Q.sub.j /P.sub.j)
where H is an arbitrary constant. Unless P.sub.j and Q.sub.j are time variant, there appears to be no significant gain to be achieved by weighting in accordance with APP decoding. For the time-invariant case, all P.sub.j values for j.noteq.0 are equal for self-orthogonal codes that have an equal number of terms in each parity check. Because P.sub.o .congruent.P.sub.j /J, w.sub.o will be slightly higher than the other w.sub.j in APP decoding. When the received voltage levels are observed prior to detection, however, P.sub.j and Q.sub.j will be conditional probabilities that are time variant in accordance with the received levels of signal plus noise for the code symbols used to obtain the jth estimate of X.sub.n.
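Massey's APP weighting rule above can be sketched directly; the function name `app_weight` is hypothetical, and the probability values below are illustrative only.

```python
# Sketch of the APP weighting coefficient w_j = H * log(Q_j / P_j),
# where P_j is the error probability of the jth estimate and Q_j = 1 - P_j.
import math

def app_weight(p_err, H=1.0):
    """Weight for an estimate whose error probability is p_err (0 < p_err < 0.5)."""
    return H * math.log((1.0 - p_err) / p_err)
```

A more reliable estimate (smaller P.sub.j) receives a larger weight, and because P.sub.o .congruent.P.sub.j /J the detection estimate is weighted slightly more heavily than the parity-based estimates.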
A rigorous determination of the optimum weighting coefficients for APP would necessitate processing of considerable complexity, and this optimum technique is difficult to implement for other than low-speed applications. It would be desirable to find a suboptimum weighting method yielding decoding results almost as good as APP weighting while reducing the complexity of the weighting algorithm.
A further technique is disclosed in U.S. Pat. No. 3,609,682, to Mitchell. The Mitchell patent describes a simple case of 3-level soft-detection for improving performance of majority decoding. The weighting of each estimate is either unity or zero, i.e., the estimate is considered completely reliable or completely unreliable. An estimate is not used when considered unreliable, and this zero reliability weighting is given when the received voltage level of any code symbol used to form the estimate falls within the "null" zone of the soft-detector. Thus, the Mitchell technique loses some coding gain in discarding estimates altogether.
A still further soft-detection technique is disclosed in U.S. Pat. No. 3,805,236 to Battail, in which soft detection is used for variable reliability weighting of all estimates of an information bit, which is decoded on a weighted majority basis. Two weightings are disclosed in Battail: (1) assigning to each estimate the likelihood weighting of its least reliable term, and (2) adding the logarithms of the individual likelihood numbers for each term to obtain the weighting coefficient for each estimate. Even with the Battail techniques, it would be desirable to utilize less complex weighting hardware.