Low-Density Parity-Check (LDPC) codes were introduced by Gallager in 1962 and rediscovered by MacKay and Neal in 1996. For a long time they had no practical impact owing to their computational and implementation complexity. This changed with advances in microelectronics, which provided the computational power needed first for simulation and now for implementation. Thanks to their excellent error correction performance, they are being considered for future telecommunication standards.
An LDPC code is a linear block code defined by its sparse M×N parity check matrix H, which contains j ones per column and k ones per row, respectively called the column and row degrees. A (j,k) regular LDPC code has uniform column and row degrees; otherwise the code is irregular. A parity check code can be represented by a bipartite graph (the Tanner graph): the M check nodes correspond to the parity constraints, the N variable nodes represent the data symbols of the codeword, and an edge in the graph corresponds to a one in the parity check matrix.
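As a small illustration, the column and row degrees and the edges of the bipartite graph can be read directly off H. The matrix below is a hypothetical 4×8 toy example, far smaller and denser than any practical LDPC matrix:

```python
import numpy as np

# Hypothetical toy parity check matrix H (M=4 checks, N=8 variable nodes).
H = np.array([
    [1, 1, 0, 1, 0, 0, 1, 0],
    [0, 1, 1, 0, 1, 0, 0, 1],
    [1, 0, 0, 1, 1, 1, 0, 0],
    [0, 0, 1, 0, 0, 1, 1, 1],
])

col_degrees = H.sum(axis=0)  # j: ones per column (variable node degree)
row_degrees = H.sum(axis=1)  # k: ones per row (check node degree)

# The code is (j,k)-regular iff all column degrees and all row degrees are uniform.
is_regular = len(set(col_degrees)) == 1 and len(set(row_degrees)) == 1
print(col_degrees, row_degrees, is_regular)

# Edges of the bipartite graph: one edge per '1' in H.
edges = [(m, n) for m in range(H.shape[0]) for n in range(H.shape[1]) if H[m, n]]
print(len(edges))  # equals the total number of ones in H
```

Here every column has weight 2 and every row weight 4, so this toy matrix describes a (2,4) regular code with 16 graph edges.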
In the LDPC code encoder, the packet to encode, of size (N−M), is multiplied by a generator matrix G of size (N−M)×N. This multiplication yields an encoded vector of length N. The generator matrix G and the parity check matrix H satisfy the relation GHᵀ=0, where 0 is the null matrix.
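The relation between G and H can be sketched with a toy systematic construction (not an actual LDPC code): when H has the form [A | I], the generator G = [I | Aᵀ] satisfies GHᵀ=0 over GF(2), and encoding is a vector-matrix product modulo 2:

```python
import numpy as np

# Toy systematic construction: H = [A | I_M], G = [I_{N-M} | A^T].
M, K = 3, 4                       # K = N - M information bits, N = 7
A = np.array([[1, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 1, 1]])      # M x K
H = np.hstack([A, np.eye(M, dtype=int)])      # M x N parity check matrix
G = np.hstack([np.eye(K, dtype=int), A.T])    # (N-M) x N generator matrix

assert not ((G @ H.T) % 2).any()  # G H^T = 0 (mod 2)

u = np.array([1, 0, 1, 1])        # packet of N - M = 4 bits
c = (u @ G) % 2                   # encoded vector of length N = 7
assert not ((H @ c) % 2).any()    # every parity check is satisfied
print(c)
```

The first K positions of c carry the packet itself; the remaining M positions are the parity bits enforced by H.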
Generally speaking, an LDPC code decoder comprises a decoding module that receives the encoded vector of length N and, using the parity check matrix H, delivers an intermediate vector of length N. A demapping module then extracts the decoded vector of length (N−M) from the intermediate vector.
More precisely, LDPC codes can be decoded using message passing algorithms, in either hard or soft decision form. The decoding is then an iterative process that exchanges messages between variable and check nodes. Typically, a Belief Propagation (BP) algorithm is used, which iteratively exchanges soft information between variable and check nodes. The code performance mainly depends on the randomness of the parity check matrix H, the codeword size N, and the code rate R=(N−M)/N.
The channel coding part is a very important component in wireless communication systems such as UMTS, WLAN and WPAN. Especially in the domain of WLAN and WPAN, decoding latency is of critical importance. LDPC codes can be seen as a promising candidate for these kinds of systems in the near future. These codes are already deployed in the DVB-S2 standard and in some optical fiber communication systems, and more applications will follow.
These codes have properties that make them a natural choice for latency-critical applications.
The new DVB-S2 standard features a powerful forward error correction (FEC) system that enables transmission close to the theoretical limit. This is achieved by using LDPC codes, which can even outperform Turbo codes. To provide flexibility, 11 different code rates R, ranging from R=¼ up to R=9/10, are specified, with a codeword length of up to 64800 bits. This huge maximum codeword length is the reason for the outstanding communication performance. The structure of the 64800-bit codeword is described next.
For the DVB-S2 code, 64800 variable nodes (VN) and 64800×(1−R) check nodes (CN) exist. The connectivity of these two types of nodes is specified in the standard. The variable nodes comprise information nodes and parity nodes. For decoding the LDPC code, messages are exchanged iteratively between these two types of nodes, while the node processing itself is of low complexity. Generally, within one iteration, the variable nodes (VN) are processed first, then the check nodes (CN).
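Since the long-frame codeword length is fixed at N=64800, the check node count for each of the 11 code rates follows directly from M = N×(1−R). A small sketch of that bookkeeping:

```python
from fractions import Fraction

# The 11 code rates specified in the DVB-S2 standard (long frame, N = 64800).
N = 64800
rates = ["1/4", "1/3", "2/5", "1/2", "3/5", "2/3",
         "3/4", "4/5", "5/6", "8/9", "9/10"]

# Check nodes per rate: M = N * (1 - R); exact with rational arithmetic.
check_nodes = {r: int(N * (1 - Fraction(r))) for r in rates}
for r, M in check_nodes.items():
    print(f"R={r}: {M} check nodes, {N - M} information bits per codeword")
```

For example, R=¼ gives 48600 check nodes, while R=9/10 gives only 6480, so the decoder workload per iteration varies strongly with the rate.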
The decoding of LDPC codes is an iterative process; for the DVB-S2 standard, for example, up to 40 iterations are required to reach the desired communication performance. Standard LDPC code decoder implementations assume a fixed number of iterations. In “A 690-mW 1-Gb/s, Rate-½ Low-Density Parity-Check Code decoder” by A. J. Blanksby and C. J. Howland, published in the IEEE Journal of Solid-State Circuits, vol. 37, no. 3, pp. 404-412, March 2002, 64 decoding iterations are performed.
For decodable blocks or codewords, a stopping criterion that takes into account, at each check node, the sum of the log-likelihood ratios of the incident edges is very effective. For undecodable blocks or codewords, however, the full number of iterations is processed, so a lot of energy and processing time is wasted.
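A common hard-decision form of early termination is to test the syndrome after every iteration and stop as soon as all parity checks are satisfied; an undecodable block still runs the full iteration budget. A sketch of such iteration control, with a hypothetical `update` callback standing in for one message passing iteration:

```python
import numpy as np

def decode_with_early_stop(H, llr, update, max_iter=40):
    """Iteration-control sketch: 'update' is a hypothetical callback that
    performs one message passing iteration and returns refined total LLRs.
    Returns the hard decisions and the number of iterations actually used."""
    total = llr.copy()
    for it in range(1, max_iter + 1):
        total = update(H, total)
        hard = (total < 0).astype(int)
        if not ((H @ hard) % 2).any():   # syndrome is zero: stop early
            return hard, it              # decodable block, iterations saved
    return hard, max_iter                # undecodable block: full budget spent

# Toy demonstration with an update that leaves the LLRs unchanged.
H = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1]])
hard, used = decode_with_early_stop(H, np.full(4, 1.5), lambda H, t: t)
print(used)   # a decodable block terminates after the first syndrome check
```

The wasted effort described above corresponds to the `max_iter` branch: for an undecodable block the syndrome never reaches zero, so no early exit is taken.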