LDPC codes were introduced by Gallager in 1962 and rediscovered in 1996 by MacKay and Neal. For a long time they had no practical impact due to their computational and implementation complexity. This changed with advances in microelectronics, which provide the computational power needed for simulation and now enable practical implementation. Owing to their excellent error correction performance, they are considered for future telecommunication standards.
An LDPC code is a linear block code defined by its sparse M×N parity check matrix H. It contains j ones per column and k ones per row, called the column and row degree, respectively. A (j,k)-regular LDPC code has row and column degrees of uniform weight; otherwise the code is called irregular. A parity check code can be represented by a bipartite graph, also called a Tanner graph. The M check nodes correspond to the parity constraints, the N variable nodes represent the data symbols of the codeword. An edge in the graph corresponds to a one in the parity check matrix.
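As a toy illustration (the matrix values are hypothetical and do not come from any standard), the following sketch builds a small (2,4)-regular parity check matrix and derives the Tanner graph edges, one edge per non-zero entry of H:

```python
import numpy as np

# Hypothetical toy parity check matrix H with M = 4, N = 8:
# j = 2 ones per column (column degree), k = 4 ones per row (row degree),
# i.e. a (2,4)-regular LDPC code.  Real codes are much larger and sparser.
H = np.array([
    [1, 1, 1, 1, 0, 0, 0, 0],
    [0, 0, 1, 1, 1, 1, 0, 0],
    [1, 0, 0, 0, 1, 0, 1, 1],
    [0, 1, 0, 0, 0, 1, 1, 1],
])

# Each one in H is an edge of the Tanner graph: check node m -- variable node n.
edges = [(m, n) for m in range(H.shape[0]) for n in range(H.shape[1]) if H[m, n]]

print("row degrees:   ", H.sum(axis=1))   # k = 4 for every check node
print("column degrees:", H.sum(axis=0))   # j = 2 for every variable node
print("number of edges:", len(edges))     # M*k = N*j = 16
```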
In the LDPC code encoder, the packet to encode, of size (N−M), is multiplied by a generator matrix G of size (N−M)×N. This multiplication yields an encoded vector of length N. The generator matrix G and the parity check matrix H satisfy the relation G·Hᵀ = 0, where 0 is the null matrix.
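This encoding step can be sketched for the systematic case, assuming (hypothetically) a parity check matrix of the form H = [A | I], for which G = [I | Aᵀ] satisfies G·Hᵀ = 0 over GF(2):

```python
import numpy as np

# Minimal encoding sketch, assuming H in systematic form [A | I_M].
# Then G = [I_(N-M) | A^T] satisfies G H^T = 0 (mod 2).
M, N = 3, 7                     # toy (7,4) code: K = N - M = 4 info bits
A = np.array([[1, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 1, 1]])    # M x K parity part (hypothetical values)
H = np.hstack([A, np.eye(M, dtype=int)])          # M x N parity check matrix
G = np.hstack([np.eye(N - M, dtype=int), A.T])    # (N-M) x N generator matrix

assert not (G @ H.T % 2).any()  # G H^T = 0 over GF(2)

u = np.array([1, 0, 1, 1])      # packet of size (N - M)
c = u @ G % 2                   # encoded vector of length N
assert not (H @ c % 2).any()    # all parity checks are satisfied
print(c)                        # [1 0 1 1 0 1 0]
```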
Generally speaking, an LDPC code decoder comprises a decoding module which receives the encoded vector of length N and delivers an intermediate vector of length N by using the parity check matrix H. A demapping module then extracts the decoded vector of length (N−M) from said intermediate vector. More precisely, LDPC codes can be decoded using message passing algorithms, in either hard or soft decision form. The decoding is then an iterative process which exchanges messages between variable and check nodes. Typically a Belief Propagation (BP) algorithm is used, which iteratively exchanges soft information between variable and check nodes. The code performance mainly depends on the randomness of the parity check matrix H, the codeword size N and the code rate R=(N−M)/N.
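The iterative message exchange can be sketched with the min-sum algorithm, a common low-complexity approximation of Belief Propagation; the toy parity check matrix and channel values below are hypothetical, and a production decoder would differ considerably:

```python
import numpy as np

def min_sum_decode(H, llr, iterations=20):
    """Two-phase min-sum decoding: within each iteration the variable
    nodes are processed first, then the check nodes.  llr holds the
    channel log-likelihood ratios (positive means bit 0 is more likely)."""
    M, N = H.shape
    c2v = np.zeros((M, N))                 # check -> variable messages
    for _ in range(iterations):
        # variable node phase: combine channel value and incoming messages
        total = llr + c2v.sum(axis=0)
        hard = (total < 0).astype(int)
        if not (H @ hard % 2).any():       # all parity constraints satisfied
            break
        v2c = H * total - c2v              # each edge: total minus own input
        # check node phase: sign product and min magnitude of the other inputs
        for m in range(M):
            ns = np.flatnonzero(H[m])
            for n in ns:
                others = v2c[m, ns[ns != n]]
                c2v[m, n] = np.prod(np.sign(others)) * np.abs(others).min()
    return hard

# Toy (7,4) parity check matrix (hypothetical values) and a received word
# in which bit 5 is weakly corrupted (its LLR has the wrong sign).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.array([-4.0, 3.5, -3.8, -4.2, 2.9, 0.5, 3.1])
print(min_sum_decode(H, llr))              # decodes to [1 0 1 1 0 1 0]
```

The inner check node loop shows why the node processing is of low complexity: each outgoing message is just a sign product and a minimum over the other incoming magnitudes.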
The channel coding part is a very important component in wireless communication systems like UMTS, WLAN and WPAN. Especially in the domain of WLAN and WPAN, the latency of the decoding may be critical. Low Density Parity Check codes can be seen as a promising candidate for this kind of system in the near future. These codes are already being deployed in the DVB-S2 standard and in some optical fiber communication systems, and more applications will follow.
These codes have some very interesting properties which make them a natural choice for latency-critical applications. The DVB-S2 standard features a powerful forward error correction (FEC) system which enables transmission close to the theoretical limit; this is achieved by using LDPC codes, which can even outperform Turbo codes. To provide flexibility, 11 different code rates (R) ranging from R=1/4 up to R=9/10 are specified, with a codeword length of up to 64800 bits. This huge maximum codeword length is the reason for the outstanding communications performance, so the description that follows focuses on the codeword length of 64800 bits.
For the DVB-S2 code, 64800 so called variable nodes (VN) and 64800×(1−R) check nodes (CN) exist. The connectivity of these two types of nodes is specified in the standard. For decoding the LDPC code, messages are exchanged iteratively between these two types of nodes, while the node processing is of low complexity. Generally, within one iteration, first the variable nodes (VN) are processed, then the check nodes (CN).
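The resulting check node counts for a few of the specified code rates can be worked out directly from N×(1−R); the rates shown here are only a subset of the 11 specified ones, picked for illustration:

```python
# DVB-S2 node counts: N = 64800 variable nodes, N*(1-R) check nodes.
N = 64800
for num, den in [(1, 4), (1, 2), (3, 4), (9, 10)]:   # subset of the 11 rates
    cn = N * (den - num) // den      # integer arithmetic, exact for these rates
    print(f"R = {num}/{den}: {cn} check nodes, {N - cn} information bits")
```

For example, at R=1/2 the decoder must handle 32400 check nodes; at R=9/10 only 6480.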
For a fully parallel hardware realization, each node is instantiated and the connections between the nodes are hardwired. But even for relatively short block lengths like 1024 bits, severe routing congestion problems exist. Therefore, a partly parallel architecture may become mandatory for larger block lengths, wherein only a subset of the nodes is instantiated. A network then has to provide the required connectivity between variable nodes and check nodes, but realizing an arbitrary permutation pattern is very costly in terms of area, delay and power.
To avoid this problem, a decoder-first design approach was presented in “Decoder First Code Design” by E. Boutillon, J. Castura, and F. Kschischang (2nd International Symposium on Turbo Codes and Related Topics, pages 459–462, Brest, France, September 2000). First an architecture is specified, and afterwards a code is designed which fits this architecture. This approach is only suitable for regular LDPC codes, where each variable node has the same number of incident edges, and likewise each check node. But for improved communications performance, so-called irregular LDPC codes are mandatory, where the variable nodes are of varying degrees. This is the case for the DVB-S2 code. In “Design Methodology for IRA Codes” by F. Kienle and N. Wehn (Proc. 2004 Asia South Pacific Design Automation Conference, Yokohama, Japan, January 2004), a design method for irregular LDPC codes which can be efficiently processed by a decoder hardware is presented.
Such decoder hardware requires separate memories for mapping the information nodes and the check nodes. Generally speaking, in a partly parallel LDPC decoder architecture each message in the Tanner graph has to be stored. Due to the use of the two-phase algorithm (first the variable nodes are processed, then the check nodes), two separate RAM banks are required to store all the updated messages. Both RAM banks can be merged by using dual-port RAMs, which make it possible to read one message and write another one every cycle at different addresses. However, using dual-port RAMs is area- and power-consuming.