Low-Density Parity-Check (LDPC) decoders are current-generation iterative soft-input forward error correction (FEC) decoders that have found increasing popularity in FEC applications where a low error floor and high performance are desired. An LDPC code is defined in terms of a two-dimensional matrix, referred to as an H matrix, which describes the connections between the data and the parity. The H matrix comprises rows and columns of data and parity information. Decoding an LDPC code requires solving the code according to the H matrix using a two-step iterative algorithm. Soft decoding drives the decoded word toward the true codeword; convergence is achieved over a number of iterations and results in a corrected codeword with no errors.
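The role of the H matrix can be illustrated with a small sketch: a word is a valid codeword exactly when every parity check (every row of H) is satisfied, i.e. the syndrome H·c (mod 2) is all zeros. The matrix below is a hypothetical toy example for illustration only; a real LDPC H matrix is far larger and sparser.

```python
# Hypothetical toy parity-check matrix for illustration; real LDPC
# H matrices are large and sparse.
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def syndrome(H, codeword):
    """Return H * c (mod 2). An all-zero syndrome means every parity
    check is satisfied, i.e. the word is a valid codeword."""
    return [sum(h * c for h, c in zip(row, codeword)) % 2 for row in H]

# The all-zero word trivially satisfies every check; flipping one bit
# produces a nonzero syndrome that flags the failing checks.
print(syndrome(H, [0, 0, 0, 0, 0, 0, 0]))
print(syndrome(H, [0, 0, 0, 1, 0, 0, 0]))
```

An iterative soft decoder repeatedly updates its bit estimates until this syndrome reaches all zeros (or the iteration budget is exhausted).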
A category of LDPC codes, known as quasi-cyclic (QC) codes, yields an H matrix whose structure eases implementation of the LDPC encoder and decoder. In particular, it is possible to generate a QC-LDPC H matrix in which some rows are orthogonal to each other. These orthogonal rows are treated as a layer, and rows within a layer can be processed in parallel, thus reducing the iterative cost of the decoder. It is advantageous to reduce the number of iterations necessary to decode an LDPC code.
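A QC-LDPC H matrix is conventionally described by a small base matrix whose entries give cyclic shifts of a z-by-z identity block (with a sentinel for the all-zero block). The sketch below shows this expansion under that common convention; the base matrix and lifting size z are illustrative assumptions, not taken from any particular standard.

```python
def circulant(z, shift):
    """z-by-z identity matrix with each row cyclically shifted
    right by `shift` columns."""
    return [[1 if (c - r) % z == shift else 0 for c in range(z)]
            for r in range(z)]

def expand(base, z):
    """Expand a QC base matrix (entry >= 0: circulant shift,
    -1: all-zero block) into the full binary H matrix."""
    H = []
    for base_row in base:
        blocks = [circulant(z, s) if s >= 0 else [[0] * z for _ in range(z)]
                  for s in base_row]
        for r in range(z):
            # Concatenate row r of every block in this block-row.
            H.append([v for b in blocks for v in b[r]])
    return H

# Illustrative 2x2 base matrix with lifting size z = 3.
H = expand([[0, 1], [-1, 2]], 3)
```

Because each block-row touches each block-column in at most one circulant, the z rows within a block-row are orthogonal and can be updated in parallel, which is the property the layered decoder exploits.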
FIG. 1 is a block diagram of a known LDPC decoder 100. Noisy data arrives from a channel as soft information at the decoder 100 and is typically routed via an input 102 to a main memory 110 in a manner that avoids pipeline stalls. The main memory 110 comprises a plurality of memory elements. In an example implementation, each memory element is a two-port memory supporting one write and one read per clock cycle. Typically, these memories are implemented as two-port register files. A plurality of layer processors 120 are connected to the main memory 110, with each layer processor 120 operating in parallel with the other layer processors. A first adder 122 in the layer processor 120 removes the extrinsic information for the layer of the H matrix currently being operated on.
A check node 130 performs an approximation of the belief-propagation method, such as the min-sum method. A second adder 124 combines the extrinsic information generated by the check node 130 with the channel information for the layer and provides the result to the main memory 110 for storage until the next update. The delay element 128 feeds the extrinsic information back for processing in the next iteration. The layer processors 120 are the dominant source of processing and power consumption in the LDPC decoder. The iterative decode process proceeds based on the specified H matrix until the decode process has completed, either by converging to a solution or by running out of processing time.
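The layer-processor dataflow described above can be sketched in software. This is a minimal illustrative model, not the circuit of FIG. 1: `process_layer` subtracts the layer's previous extrinsic messages (as the first adder 122 does), applies a min-sum check-node update (check node 130), and adds the new extrinsic messages back into the running soft totals (second adder 124). The function and variable names are assumptions made for this sketch.

```python
def check_node_minsum(msgs):
    """Min-sum check-node update: each output carries the product of
    the signs and the minimum magnitude of all *other* inputs."""
    out = []
    for i in range(len(msgs)):
        others = msgs[:i] + msgs[i + 1:]
        sign = 1
        for m in others:
            if m < 0:
                sign = -sign
        out.append(sign * min(abs(m) for m in others))
    return out

def process_layer(totals, prev_extrinsic, cols):
    """One layer update: `totals` holds per-variable soft values (LLRs),
    `prev_extrinsic` the layer's messages from the previous iteration,
    and `cols` the variable-node indices this layer checks."""
    # Adder 122: remove the old extrinsic contribution of this layer.
    variable_to_check = [totals[c] - prev_extrinsic[j]
                         for j, c in enumerate(cols)]
    # Check node 130: compute new extrinsic messages.
    new_extrinsic = check_node_minsum(variable_to_check)
    # Adder 124: fold the new extrinsic messages back into the totals.
    for j, c in enumerate(cols):
        totals[c] = variable_to_check[j] + new_extrinsic[j]
    return new_extrinsic
```

In a layered schedule, each layer's update immediately refreshes the shared totals, so later layers in the same iteration already see the improved soft values.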
As an LDPC decoder iterates towards a solution, the processing steps in the layer processor 120 generate an increasing number of results that are the same as, or very similar to, those of previous iterations, resulting in convergence.
FIG. 2 is a graph illustrating convergence for variable and check nodes. FIG. 2 represents the state of an LDPC decoder in progress, with V1 to VN+M representing the variable nodes and L1 to LC representing the layers. Shaded columns show the converged variable nodes of an LDPC code word. A variable node is converged when its sign bit is correct and the magnitude of the data in the node is strong (in a belief-propagation network, the higher the magnitude of a node, the stronger the confidence in that node). Shaded rows show the converged check nodes of an LDPC code word. A check node is converged when the minimum output of the check node is confident. As the variable and check nodes iterate, the number of converged nodes increases and the graph becomes increasingly shaded, until at the final iteration very few low-confidence nodes remain.
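The variable-node convergence criterion above can be sketched as a simple magnitude test on the soft values. The threshold below is an illustrative tuning parameter of this sketch, not a value from the source.

```python
def converged_variables(llrs, threshold=8.0):
    """Flag variable nodes whose soft value (LLR) magnitude is strong
    enough to be considered confident; `threshold` is an illustrative
    assumption, tuned per design in practice."""
    return [abs(v) >= threshold for v in llrs]

# Two confident nodes (large magnitudes) and one still-undecided node.
flags = converged_variables([9.5, -12.0, 2.1])
```

A decoder can use such per-node flags to skip or gate updates for already-converged nodes, which is one way the growing shaded region of FIG. 2 translates into saved processing.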
Improvements in FEC decoding are therefore desirable.