The disclosure relates generally to error correction techniques for data communications and more specifically to forward error correction techniques.
1.1 Forward Error Correction and Modern Codes
Forward error correction (FEC) (also known as channel coding) is an indispensable part of modern digital communication systems, especially wireless digital communication systems where the information-bearing signals undergo severe distortion during propagation through the wireless medium (channel). Forward error correction helps combat such deleterious effects by introducing structured redundancy in the transmitted signals.
When describing the operation of FEC, information messages are often modeled as K-bit words b=(b1, b2, . . . , bK), where each message bit bk can assume one of two values, ‘0’ or ‘1’. The vector b is called a binary message (data) block. A binary (N, K) FEC code (also known as a channel code) is a collection of 2K N-bit codewords, where N>K, such that one codeword corresponds to each possible message word. A binary FEC encoder (referred to herein simply as an encoder) for an (N, K) binary FEC code maps a K-bit message block b=(b1, b2, . . . , bK) to an N-bit codeword c=(c1, c2, . . . , cN) in a deterministic fashion. In a binary systematic code the message block appears explicitly as part of the corresponding codeword, e.g. c1=b1, c2=b2, . . . , cK=bK. In a systematic codeword, bits other than message bits are called parity bits. The number of parity bits in a systematic code is, therefore, P=N−K. The code rate of an (N, K) code is defined to be R=K/N=K/(K+P). Lower code rates indicate higher redundancy (more parity) as compared to higher code rates.
A binary FEC decoder performs the inverse operation by estimating the intended message word from the received word. The received word is denoted by λin=(λin,1, λin,2, . . . , λin,N), where λin,j is the received value for the jth codeword bit as input to the decoder. The received values summarize the effects of all operations performed starting from the output of the encoder on the transmit side to the input of the decoder on the receive side, such as modulation, transmission through the physical channel, and demodulation. In general, the received values may or may not be binary (i.e. ‘0’ or ‘1’). If the {λin,j} values are binary, the decoder is said to have hard inputs; otherwise it is said to have soft inputs. A common example of soft inputs is the log-likelihood ratios of the codeword bits, which take values other than ‘0’ or ‘1’. The log-likelihood ratio (LLR) of a bit is the natural logarithm of the ratio of the empirical (computed) probability that the bit has the value ‘0’ to the empirical probability that the bit has the value ‘1’:
λin,k = log [ p(ck=0) / p(ck=1) ]    (1.1.1)
Other variants of soft-information input to the decoder, such as probability values, are equivalent to LLRs in the sense that one can be obtained from the other, for example, p(ck=0)=1/(1+ exp(−λin,k)). For this reason, without loss of generality, soft-information regarding a particular bit is assumed to be a log-likelihood-ratio.
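As a concrete illustration, the two equivalent representations noted above can be converted back and forth in a few lines of Python (the function names here are illustrative, not part of any standard library):

```python
import math

def llr_from_prob(p0):
    """LLR of a bit per equation (1.1.1): log of p(c=0) over p(c=1) = 1 - p(c=0)."""
    return math.log(p0 / (1.0 - p0))

def prob_from_llr(llr):
    """Inverse mapping: p(c=0) = 1 / (1 + exp(-llr))."""
    return 1.0 / (1.0 + math.exp(-llr))
```

A certain ‘0’ (p0 near 1) maps to a large positive LLR, a certain ‘1’ maps to a large negative LLR, and p0 = 0.5 maps to an LLR of exactly zero.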
Since the introduction of the (binary) turbo decoder, which demonstrated unprecedented near-capacity performance, modern binary codes with related structure (low-density parity-check codes, serially-concatenated convolutional codes, repeat-accumulate codes and variants, turbo-product codes, etc.) have been a popular topic of research and development. Decoders for modern or turbo-like codes operate by computing a posteriori LLRs or equivalent metrics for the message bits based on the input soft information λin. We denote this output soft-information as
λk ≈ log [ p(bk=0 | {λin,n, n=1, . . . , N}) / p(bk=1 | {λin,n, n=1, . . . , N}) ]    (1.1.2)

where ≈ indicates that the computation of the right-hand side of equation (1.1.2) may be inexact in a practical decoder. Bit decisions are delivered by taking the sign of the output metrics of equation (1.1.2); for this reason they are also called bit decision metrics. A positive value of a bit decision metric (as defined by equation (1.1.2)) yields a decision of ‘0’, whereas a negative value yields a decision of ‘1’.
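The sign-based decision rule can be sketched as follows (the function name is illustrative; the treatment of a metric that is exactly zero is an arbitrary tie-break):

```python
def bit_decisions(metrics):
    """Map output bit decision metrics to hard bit decisions:
    positive metric -> '0', negative metric -> '1' (per eq. (1.1.2)).
    A metric of exactly zero is an arbitrary tie; '1' is chosen here."""
    return [0 if metric > 0 else 1 for metric in metrics]
```

For example, `bit_decisions([2.3, -0.7, 0.1, -4.0])` returns `[0, 1, 0, 1]`.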
The number of bit errors in the decoded block is denoted by KE. The instantaneous bit-error-rate (BER) is defined to be the fraction of bits in error for a particular block, as given by equation (1.1.3):
μE = KE/K    (1.1.3)

Non-Binary Codes
In addition to binary codes, non-binary codes may also be constructed, in which information symbols {bk, k=1, . . . , K} belong to an alphabet Fb and codeword symbols {cn, n=1, . . . , N} belong to an alphabet Fc, and at least one of Fb and Fc is non-binary. A non-binary code can be regarded as systematic if there is a one-to-one function ƒ: FbM→Fc such that ƒ(bmM+1, bmM+2, . . . , b(m+1)M)=cm, m=1, 2, . . . , K, i.e. if the information symbols can be grouped in M-tuples to produce the systematic codeword symbols, where M=log|Fc|/log|Fb|≥1. In this case, the code can be regarded as a systematic code with rate R=K/(K+PM), where P=(N−K). The quantity P can be regarded as the number of parity symbols. For example, if Fb={0,1} (binary) and Fc={0,1,2,3}, and ck=2b2k+b2k+1 (M=2), the codeword symbols c1, c2, . . . , cK are systematic symbols.
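Using the quaternary example above (with 0-based bit indexing for convenience), the grouping of bit pairs into systematic symbols can be sketched as:

```python
def bits_to_quaternary(bits):
    """Group bit pairs into symbols from Fc = {0, 1, 2, 3} via
    c = 2*b_first + b_second (the M = 2 mapping from the text,
    restated with 0-based indexing)."""
    return [2 * bits[i] + bits[i + 1] for i in range(0, len(bits), 2)]
```

For example, the bit block (1, 0, 0, 1) maps to the symbol sequence (2, 1).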
For a non-binary systematic code, the decoder may accept lists of symbol probabilities λ⃗in,n={p(cn=c), c∈Fc}, n=1, . . . , N, as input and compute output symbol probabilities for the systematic symbols, {p(ck=c | {λ⃗in,n, n=1, . . . , N}), c∈Fc} for k=1, 2, . . . , K. Contrary to the binary-alphabet case, the set of probabilities for a non-binary symbol alphabet cannot be summarized by a single scalar value.
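For the quaternary example above, a minimal sketch of reducing such a per-symbol probability list to per-bit marginals (summing the probabilities of the symbol values consistent with each bit value) might look like:

```python
def bit_marginals(symbol_probs, i, M=2):
    """Marginal probabilities of the i-th constituent bit (i in
    {0, ..., M-1}) of a symbol, given a dict of symbol probabilities
    {c: p(c)} under the mapping c = 2*b_first + b_second."""
    marginals = {0: 0.0, 1: 0.0}
    for c, p in symbol_probs.items():
        bit = (c >> (M - 1 - i)) & 1  # extract the i-th bit of symbol value c
        marginals[bit] += p
    return marginals

# Example: p(c=0)=0.1, p(c=1)=0.2, p(c=2)=0.6, p(c=3)=0.1
probs = {0: 0.1, 1: 0.2, 2: 0.6, 3: 0.1}
```

Here `bit_marginals(probs, 0)` sums {2, 3} against {0, 1}, giving p(b=1) = 0.7 for the first bit.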
Since the probabilities for each symbol sum to one, a list of |F|−1 probability values should be maintained for each symbol for an alphabet of size |F|. The a posteriori probabilities of the input symbols can then be computed by simply summing over those output symbol probabilities that are consistent with the symbol value. Mathematically,
p(bmM+i = b | {λ⃗in,n, n=1, . . . , N}) = Σ_{c∈Si(b)} p(cm = c | {λ⃗in,n, n=1, . . . , N})    (1.1.4)

where Si(b) is the subset of Fc consisting of those values c for which c=ƒ(bmM+1, bmM+2, . . . , b(m+1)M) with bmM+i=b. The decoder would then deliver a symbol estimate for input symbol bm by choosing the most likely symbol, i.e. the decoder would declare bm=bEST,m, where bEST,m=arg maxb∈Fb p(bmM+i=b | {λ⃗in,n, n=1, . . . , N}). In this case, an instantaneous symbol error rate (SER) can be computed analogously to equation (1.1.3), as the fraction of erroneous input symbols in the decoded block.

1.2 Adaptive-Coding and Incremental/Decremental Redundancy
Digital communication systems can benefit from using multiple code-rates by using lower code rates when the channel is unreliable, thereby reducing the probability of erroneous reception, and by using higher code rates when the channel is favorable, limiting the amount of overhead for reliable communication. The use of different code rates for matching the channel reliability is referred to as adaptive coding. Adaptive coding is a particular means of link adaptation, which is a generic term for methods of changing the attributes of a transmitted signal based on the conditions of the radio link. In adaptive coding, the signal attribute that is changed is the code rate.
The implementation of adaptive coding is facilitated by the use of a systematic (K+P, K) binary code, in which subsets of parity bits are punctured to achieve rates higher than the mother code rate R=K/(K+P). Punctured parity bits are not transmitted as part of the final codeword. On the receive end, the decoder simply depunctures the corresponding punctured bits by declaring a zero input soft-information value for the corresponding locations of the received word. For an input message size of K bits, a family ℛ of code rates can be obtained by varying the amount of puncturing:
ℛ = { Ri = K/(K+Pi) : 0 < Pi ≤ Pmax }    (1.2.1)
Such a family of codes is called a rate-compatible systematic code family. An example of a rate-compatible systematic code family is the Flexible-LDPC (F-LDPC) codes of TrellisWare Technologies.
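The puncturing and depuncturing steps described above can be sketched as follows (the function names and position lists are illustrative; a zero LLR encodes "no information" for a punctured location):

```python
def puncture(codeword_bits, punctured_positions):
    """Drop the punctured parity bits before transmission."""
    punct = set(punctured_positions)
    return [b for i, b in enumerate(codeword_bits) if i not in punct]

def depuncture(received_llrs, punctured_positions, n):
    """Rebuild a length-n soft-input word by reinserting zero LLRs
    at the punctured locations."""
    punct = set(punctured_positions)
    out, it = [], iter(received_llrs)
    for i in range(n):
        out.append(0.0 if i in punct else next(it))
    return out
```

For example, puncturing the last two parity positions of a 6-bit codeword yields a 4-bit transmission, and the decoder's depuncturer pads the two missing locations with zero LLRs before decoding.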
Adaptive coding (like all forms of link adaptation) relies on feedback from the receive side to the transmit side regarding prevailing channel conditions. In its simplest form, the feedback information from the receive side is an acknowledgement (ACK) message for a block, or group of blocks, estimated to be error-free by means of an outer error-detection encoding circuit (separate from the FEC, which may or may not exist). The arrival of a negative acknowledgement (NAK) message at the transmit side is interpreted as a failure, and retransmission of the affected blocks is requested. This is the operating principle of the basic Automatic Repeat reQuest (ARQ) protocol.
When the communication system has a (systematic) FEC code, a hybrid-ARQ (H-ARQ) protocol can be employed for increased efficiency. In type-II H-ARQ, a block of message bits is first transmitted without the parity bits originating from the FEC, with only the extra bits originating from the error-detection scheme. If error detection passes on the receive side, an ACK is sent. If the receive-side error detection indicates the presence of errors (a failed block), a NAK is issued and a retransmission consisting of only parity bits (originating from the encoding of the message bits and the error-detection bits) is generated. If the parity transmission fails as well, the algorithm repeats and the message bits (as well as the error-detection bits) are retransmitted. The decoder can then combine the two copies of the received message words to achieve greater reliability.
When the system is equipped with a rate-compatible systematic FEC, a finer resolution of code rates could be used for retransmissions, by incrementally transmitting more parity bits until error-detection passes. This general incremental parity-retransmission scheme is known as incremental-redundancy protocol. An incremental-decremental redundancy scheme comprises a method for determining the incremental parity size following a NAK, and a method for determining the code-rate to be used for a new data packet following an ACK. The latter rate is called the initial transmission rate.
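The transmit-side behavior of such a protocol can be sketched as a simple loop (`send_fn` and `recv_ack_fn` are hypothetical callbacks standing in for the physical transmission and the ACK/NAK feedback channel; the fixed `step` is one possible incremental-parity-size policy):

```python
def send_with_incremental_redundancy(send_fn, recv_ack_fn, systematic_bits,
                                     parity_bits, step):
    """Sketch of a transmit-side incremental-redundancy loop: send the
    systematic bits first, then release parity in increments of `step`
    bits until the receiver acknowledges successful decoding. When all
    parity is exhausted, fall back to retransmitting the block."""
    send_fn(systematic_bits)
    sent = 0
    while not recv_ack_fn():  # NAK received: supply more redundancy
        if sent >= len(parity_bits):
            send_fn(systematic_bits)  # parity exhausted: retransmit block
            sent = 0
            continue
        send_fn(parity_bits[sent:sent + step])
        sent += step
```

With two NAKs followed by an ACK, this loop transmits the systematic bits once and then two parity increments.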
The efficacy of an incremental-redundancy scheme is determined by its average throughput, which is the average amount of information transmitted per unit time. The throughput usually includes penalties for extra parity requests as well as retransmissions.
If the incremental parity step is too small, it may take many parity requests and retransmissions to receive an ACK, and the average throughput is diminished by the protocol overhead. Even though the throughput cost of a parity request may be low, significant latency in the successful decoding of blocks may hinder practicality. If the parity step is too large, the FEC may operate at a much lower rate than it could, reducing throughput through an unnecessarily high number of parity bits during favorable communication conditions.
One shortcoming of the aforementioned incremental-redundancy scheme is its inability to adjust to improving channel conditions. Consider a scenario where the channel condition improves during the course of a series of transmitted blocks, so that blocks could be encoded using fewer parity bits (less redundancy) while still being successfully decoded. However, after receiving an ACK for one of these blocks, an incremental-redundancy scheme may not be able to reduce the overhead for subsequent blocks unless the transmit side goes through all the incremental-parity steps starting from the highest code rate (which may simply be uncoded transmission), causing a significant reduction in throughput.
Techniques that overcome these and other shortcomings of conventional coded ARQ techniques are desired.