Coding techniques are of fundamental importance for the reliable transmission of information over physical channels. The redundancy introduced by an encoder at the transmitter allows the decoder at the receiver to recover the sequence of information bits with a probability of error significantly smaller than that achieved by transmission systems that do not resort to coding techniques. An example of a technology that employs coding to achieve reliable data transmission is given by the contemporary Single-Pair High-Speed Digital Subscriber Line (SHDSL), for transmission at data rates of up to 2.32 Mbit/s over standard telephone lines in customer service areas.
We usually distinguish two broad classes of coding techniques, each with several subclasses, one employing block codes and the other convolutional codes. All coding techniques add redundancy, in the form of additional bits, to the information bits that must be transmitted. Redundancy makes possible the recovery of information bits with high reliability. The effectiveness of a coding technique is expressed in terms of the coding gain, given by the difference between the signal-to-noise ratios, in dB, that are required to achieve a certain bit error probability for transmission without and with coding.
Convolutional codes are a subclass of the class of tree codes, so named because their code words can be conveniently represented as sequences of nodes in a tree. A convolutional code may be described in terms of a tree, trellis, or state diagram. Tree codes are of great interest because decoding algorithms have been found that are easy to implement, and can be applied to the entire class of tree codes, in contrast to decoding algorithms for block codes, each designed for a specific class of codes, as for example Reed-Solomon codes.
Convolutional codes are usually characterized by two parameters, the code rate R_C and the constraint length v. The code rate is given by the ratio between the number of information bits k and the number of code bits n that are generated per encoder cycle; therefore the code rate is R_C = k/n. A convolutional encoder may be represented as a finite state machine that determines the output bits and the next state depending on the input bits and the current state. The current state is identified by the bits input to the encoder during the previous v cycles. Therefore the constraint length v is proportional to the number of binary storage elements of the encoder, and the number of states of the encoder is 2^(kv). The encoder output bits generated per encoder cycle may be transmitted using a binary modulation scheme, for example binary phase shift keying (BPSK), or mapped to a symbol of a multilevel constellation prior to transmission by a multilevel modulation scheme, for example pulse amplitude modulation (PAM) or quadrature amplitude modulation (QAM). The time interval required for the transmission of a symbol is usually referred to as the modulation interval.
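As an illustration of these definitions, the following is a minimal sketch (ours, not part of any disclosure discussed herein) of a rate R_C = 1/2 binary convolutional encoder with k = 1 and v = 2, so that the encoder has 2^(kv) = 4 states; the generator polynomials, octal (7, 5), are an assumed example.

```python
# Illustrative rate-1/2 convolutional encoder (k = 1, n = 2, v = 2).
# The generator polynomials (octal 7 and 5) are assumed for the example.

def conv_encode(bits, g=(0b111, 0b101), v=2):
    """Encode a bit sequence; the state holds the previous v input bits,
    giving 2**(k*v) encoder states."""
    state = 0
    out = []
    for b in bits:
        reg = (b << v) | state               # current input bit followed by v stored bits
        for poly in g:                       # one output bit per generator polynomial
            out.append(bin(reg & poly).count("1") % 2)
        state = (reg >> 1) & ((1 << v) - 1)  # shift register: keep the last v inputs
    return out
```

For the input sequence 1, 0, 0 this encoder emits the six code bits 11 10 11, i.e. two code bits per information bit, consistent with R_C = k/n = 1/2.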
At the receiving side, the sequence of transmitted code symbols, or equivalently the sequence of encoder states, is detected using a decoding algorithm. The most widely used decoding algorithms are the Fano algorithm for sequential decoding, the Viterbi algorithm for maximum likelihood decoding, and the forward-backward algorithm, also known as the BCJR algorithm, for maximum a posteriori (MAP) probability decoding. The information bits are then recovered from the detected sequence.
In U.S. Pat. No. 3,457,562 to Robert M. Fano, there is disclosed a basic sequential decoder implementing the Fano algorithm. Sequential decoding represents an attractive technique for the decoding of convolutional codes or trellis codes in case the number of states of the encoder is large. In the Fano algorithm, the sequential decoder explores one path of a decoder tree at a time. The decoder tree is developed using the knowledge of the encoder finite state machine. In particular, each branch is labeled with the code bits or code symbol that would be transmitted in case of a transition of the encoder finite state machine from the state corresponding to the node at which the branch originates to the state corresponding to the node at which the branch ends. Three types of moves are allowed: forward, lateral, and backward. On a forward move, the decoder goes one branch to the right in the decoder tree from the previously hypothesized node. On a lateral move, the decoder goes from a node on the tree to another node differing only in the last branch. The ordering among the nodes is arbitrary, and a lateral move takes place to the next node in order after the current one. A backward move is a move one branch to the left on the tree. To determine which move needs to be made after reaching a certain node, it is necessary to compute the metric Γ_l of the current node being hypothesized, and consider the value of the metric Γ_{l−1} of the node one branch to the left of the current node, as well as the current value of a threshold T_l, which can assume values that are multiples of a given constant Δ. The metric of a node is obtained by summing the metrics of the branches on the path leading to that node. The branch metric represents the distance between the noisy received signal and the code symbol with which a branch is labeled.
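The move-selection rule just described can be sketched as follows. This is our simplified illustration, with names of our choosing; a complete Fano decoder additionally handles details such as tightening the threshold on the first visit to a node.

```python
# Simplified sketch of the Fano move-selection rule (illustrative only).

DELTA = 1  # threshold quantization step (the constant Δ in the text)

def next_move(forward_metric, gamma_prev, threshold):
    """Choose the next move of the sequential decoder.

    forward_metric: metric of the best node one branch forward,
    gamma_prev:     metric Γ_{l-1} of the node one branch to the left,
    threshold:      current threshold T (a multiple of DELTA).
    """
    if forward_metric >= threshold:
        return "forward"      # advance one branch to the right
    if gamma_prev >= threshold:
        return "backward"     # retreat; a lateral move is then attempted
    return "loosen"           # neither move allowed: lower T by DELTA and retry
```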
The Viterbi algorithm was proposed by Andrew J. Viterbi, as described in “Error bounds for convolutional codes and an asymptotically optimum decoding algorithm” published in the IEEE Transactions on Information Theory, volume IT-13, April 1967, pp. 260–269. The Viterbi algorithm is a maximum-likelihood decoding procedure that explores in parallel every possible code sequence. Metrics are given by the Euclidean distances between the received signal and the code symbol sequences.
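As a concrete illustration, the following is a minimal hard-decision Viterbi sketch over the trellis of an assumed rate-1/2, four-state encoder with octal generators (7, 5); Hamming distance stands in for the Euclidean metric of the soft-decision case, and all names are ours.

```python
# Minimal hard-decision Viterbi decoder for an illustrative rate-1/2,
# v = 2 encoder with generators (7, 5) octal. Hamming distance is used
# as the branch metric in place of the Euclidean distance.

def viterbi(received, v=2, g=(0b111, 0b101)):
    n_states = 1 << v
    INF = float("inf")
    pm = [0.0] + [INF] * (n_states - 1)      # path metrics; start in state 0
    paths = [[] for _ in range(n_states)]
    pairs = [received[i:i + 2] for i in range(0, len(received), 2)]
    for r in pairs:
        new_pm = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if pm[s] == INF:
                continue                     # state not yet reachable
            for b in (0, 1):                 # hypothesize each input bit
                reg = (b << v) | s
                out = [bin(reg & poly).count("1") % 2 for poly in g]
                ns = reg >> 1                # next state
                m = pm[s] + sum(o != x for o, x in zip(out, r))
                if m < new_pm[ns]:           # keep the survivor path
                    new_pm[ns] = m
                    new_paths[ns] = paths[s] + [b]
        pm, paths = new_pm, new_paths
    best = min(range(n_states), key=lambda s: pm[s])
    return paths[best]
```

For the noiseless received sequence 11 10 11 (the encoder output for information bits 1, 0, 0 under the assumed generators), the decoder recovers 1, 0, 0.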
A solution to the general problem of estimating the a posteriori probabilities of the states and transitions of a finite state machine observed through a noisy channel is obtained by the forward-backward algorithm proposed by L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv in “Optimal decoding of linear codes for minimizing symbol error rate,” published in the IEEE Transactions on Information Theory, vol. IT-20, pp. 284–287, March 1974. The forward-backward algorithm can thus be applied for MAP decoding of linear block and convolutional codes.
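To illustrate the principle, a toy forward-backward sketch follows, computing a posteriori state probabilities for a two-state Markov chain observed through a noisy channel. The transition and emission probabilities in the example are illustrative assumptions; in MAP decoding of a convolutional code, the states and transitions would be those of the encoder trellis.

```python
# Toy forward-backward (BCJR-type) recursion: a posteriori state
# probabilities of a finite state machine observed through a noisy
# channel. All probability values used with it are illustrative.

def forward_backward(obs, trans, emit, prior):
    n = len(prior)
    T = len(obs)
    # forward recursion: alpha[t][s] ∝ P(obs[0..t], state_t = s)
    alpha = [[prior[s] * emit[s][obs[0]] for s in range(n)]]
    for t in range(1, T):
        alpha.append([emit[s][obs[t]] *
                      sum(alpha[-1][p] * trans[p][s] for p in range(n))
                      for s in range(n)])
    # backward recursion: beta[t][s] ∝ P(obs[t+1..] | state_t = s)
    beta = [[1.0] * n for _ in range(T)]
    for t in range(T - 2, -1, -1):
        beta[t] = [sum(trans[s][q] * emit[q][obs[t + 1]] * beta[t + 1][q]
                       for q in range(n)) for s in range(n)]
    # combine and normalize: P(state_t = s | all observations)
    post = []
    for t in range(T):
        u = [alpha[t][s] * beta[t][s] for s in range(n)]
        z = sum(u)
        post.append([x / z for x in u])
    return post
```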
The three aforementioned documents are hereby incorporated by reference.
On one hand, a drawback of the maximum-likelihood and MAP decoders is that the complexity to implement such decoders grows exponentially with the number of encoder states. Various solutions have appeared in the literature to reduce the complexity of the decoder trellis diagram, see, e.g., the reduced-state forward-backward algorithms described by G. Colavolpe, G. Ferrari, and R. Raheli in “Reduced-State BCJR-Type Algorithms,” published in the IEEE Journal on Selected Areas in Communications, vol. 19, pp. 848–859, May 2001. In particular, an approximation of the a posteriori probabilities of the states and transitions in the decoder trellis diagram can be obtained at the end of the forward recursion of a reduced-state BCJR-type algorithm, with a substantial reduction of complexity, but such an approach would lead to a non-negligible loss in performance. On the other hand, sequential decoders present the drawback that the number of computations C required for the decoding process to advance by one branch in the decoder tree is a random variable having a Pareto distribution; in other words, the probability that the number of computations C is larger than N is given by the following equation:

P[C > N] = A N^(−p),  (1)

where A and p are constants that depend on the channel characteristics, on the specific code, and on the specific version of the sequential decoding algorithm used.
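Equation (1) can be illustrated numerically; the values of A and p used below are arbitrary assumptions chosen only for the sake of the example.

```python
# Numerical illustration of the Pareto tail of Eq. (1): the probability
# that decoding one branch requires more than N computations.
# The default values of A and p are illustrative assumptions.

def tail_probability(N, A=1.0, p=1.5):
    """P[C > N] = A * N**(-p)."""
    return A * N ** (-p)
```

Because the tail decays only polynomially in N, occasional very long decoding searches occur with non-negligible probability, which is precisely why buffering and resynchronization provisions are needed.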
As the number of computations per decoded symbol is not deterministic, real-time applications of sequential decoders require buffering of the received signal samples. Furthermore, as practical sequential decoders can perform only a finite number of operations in a given time interval, it is necessary to make provisions to avoid buffer overflow, which would result in incomplete decoding (erasures). If the buffer nears saturation, to avoid erasures it is necessary to reliably compute a state for restarting the sequential decoder, a procedure also known as resynchronization. In other words, resynchronization of the decoder must take place if the maximum number of operations that is allowed for decoding without incurring buffer saturation is exceeded.
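The overflow-avoidance rule can be sketched as follows. The model, in which the allowed number of operations equals the free buffer space multiplied by the decoder speed, as well as all names, are our illustrative assumptions.

```python
# Illustrative overflow-avoidance check for a buffered sequential
# decoder: resynchronize once the operations budget implied by the
# remaining buffer space is exhausted (assumed model, not from any
# disclosure discussed herein).

def must_resynchronize(ops_used, buffer_fill, buffer_size, ops_per_sample=100):
    """Return True when decoding must be abandoned and the decoder
    restarted from a recomputed state to avoid buffer saturation."""
    allowed_ops = (buffer_size - buffer_fill) * ops_per_sample
    return ops_used > allowed_ops
```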
Some solutions for resynchronization of the decoder are known, for example the buffer looking algorithm (BLA). In the BLA, the buffer is divided into L sections, each section having a size equal to B_j, j = 1, . . . , L. One conventional sequential decoder, generally named the primary decoder, and L−1 secondary decoders are used. The secondary decoders run fast algorithms, such as the so-called M-algorithm or variations of the Fano algorithm. For an in-depth description of the M-algorithm, one skilled in the art may refer to the article “M-algorithm decoding of channel convolutional codes,” by C. F. Lin and J. B. Anderson, published in the Conference Records of the Princeton Conf. Inform. Sci. Syst., Princeton, N.J., March 1986, pp. 362–365. In systems employing the BLA, the (L−1)-th secondary decoder, which is used to resynchronize the sequential decoder as the buffer nears saturation, is a hard-decision decoder. However, the BLA does not address satisfactorily the problem of possible errors in the resynchronization of the primary decoder. In fact, using a hard-decision decoder as a secondary decoder yields a low reliability of the recovered state. As a consequence, the sequential decoder will need with high probability a long sequence of received signal samples to resynchronize successfully, with the result of increasing the probability of repeated buffer saturation events and hence long erasures. Furthermore, sequential decoder resynchronization based on hard decisions is well suited only for the decoding of code sequences generated by encoders in systematic form, whereas this method cannot be applied if the encoders are in nonsystematic form. A definition of systematic and nonsystematic encoders is given in “Convolutional codes I: Algebraic structure,” by G. D. Forney, Jr., published in the IEEE Transactions on Information Theory, vol. IT-16, pp. 720–738, November 1970.
U.S. Pat. No. 5,710,785 to T. Yagi describes a sequential decoder having short synchronization recovery time. Convolutional code symbols are sequentially stored into a first buffer at a transmission rate and read therefrom into a decoder where the symbols are decoded at a rate higher than the transmission rate and stored into a second buffer. A controller determines the likelihood of each decoded symbol in accordance with a predetermined likelihood algorithm and causes the decoded symbols to be read out of the second buffer in a backward direction into the decoder when the determination indicates a low likelihood value. When the first buffer overflows, the controller causes symbols to be read out of the first buffer into the decoder starting with a symbol which is k symbols older than the most recently received symbol and causes the decoder to shift its symbol timing by one clock interval, where k is an integer ranging from zero to a predetermined number which is smaller than the maximum number of symbols that can be stored in the first buffer. The method by Yagi, however, does not address the problem of reliably determining an initial state for the sequential decoder to restart operations. If the state to which the decoder is resynchronized is not correct, several consecutive overflows of the first buffer may occur, thus causing long erasures.
Thus there is still a need for a reliable solution to the problem of resynchronization of sequential decoders. Other coding techniques based on parallel concatenated convolutional codes or on low-density parity-check codes lead to better system performance in terms of bit error probability, however at the expense of significantly higher computational complexity, see for example the turbo code decoder with controlled probability estimate feedback described in U.S. Pat. No. 6,223,319 to J. A. F. Ross, S. M. Hladik, N. A. Van Stralen, and J. B. Anderson. Therefore several transmission systems of practical interest, such as the aforementioned SHDSL system, still employ convolutional codes or trellis codes that allow the application of low-complexity decoding techniques, such as sequential decoding.