As is well known in the art, Low Density Parity Check (LDPC) codes provide a high-performance error-correction technique for communications systems.
LDPC codes are a subset of what are more generally known as ‘Sparse Graph Codes’. That is, their structure can be described in terms of a bipartite (or ‘Tanner’) graph with two types of nodes, namely ‘Variable Nodes’ (VNs) and Constraint (or ‘Check’) Nodes (CNs). The number of VNs will typically correspond to the number of transmitted code bits in an encoded data block (such as a Forward Error Correction, FEC, block), and the number of CNs will correspond to the number of parity bits within the encoded data block.
By way of illustration only, FIG. 1 shows a bipartite graph of a rate-¼ low-density parity-check code with an encoded block length N=8, and M=6 constraints. A respective variable node (VN) is provided for each bit of the N=8-bit encoded data block. Each bit participates in j=3 constraints. Each constraint is implemented by a respective constraint node (CN), each of which operates to force the sum of the bits received from its k=4 neighbour VNs to an even value.
Within the Tanner graph of FIG. 1, VNs are joined to CNs by bidirectional links (or ‘edges’). A node which is connected to a first node via an edge is known as the ‘neighbour’ of the first node, and vice versa. The number of edges leaving a node (or, equivalently, the number of neighbours of that node) defines the ‘degree’ of a node, with node degrees typically being in the range of 2-16. The code structure can also be defined in the form of a conventional ‘Parity Check’ matrix, H, which contains a sparse distribution of 1s within a matrix which otherwise consists only of 0s. Each column of the H matrix corresponds to a VN, and each row corresponds to a CN. Each edge connecting a VN to a neighbour CN within the Tanner graph corresponds with a 1 in the H matrix where the corresponding row and column intersect. The H matrix corresponding to the Tanner graph of FIG. 1 is:
  H = [ 1 0 1 0 1 0 1 0
        0 1 1 1 0 0 0 0
        1 1 0 0 1 1 0 1
        0 0 1 0 0 1 1 1
        0 1 0 1 0 1 1 0
        1 0 0 1 1 0 0 1 ]
The degree of a VN is equal to the weight (i.e. number of 1s) of the corresponding column, and the degree of a CN is equal to the weight of the corresponding row. FIG. 2 shows an alternative view of the LDPC code of FIG. 1, in which the bidirectional edges of the Tanner graph are illustrated using a physically realizable grid of lines extending from each node.
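By way of a sketch, the correspondence between an H matrix and the neighbour lists of its Tanner graph can be illustrated as follows. The code below uses a small hypothetical H rather than the matrix of FIG. 1, and the function and variable names are illustrative only:

```python
# Build Tanner-graph neighbour lists from a parity-check matrix H.
# H[m][n] == 1 indicates an edge between CN m and VN n.
def tanner_neighbours(H):
    M, N = len(H), len(H[0])
    # Neighbour CNs of each VN (read columns), neighbour VNs of each CN (read rows).
    vn_neighbours = [[m for m in range(M) if H[m][n] == 1] for n in range(N)]
    cn_neighbours = [[n for n in range(N) if H[m][n] == 1] for m in range(M)]
    return vn_neighbours, cn_neighbours

# Small hypothetical example (not the H of FIG. 1):
H = [[1, 1, 0, 0],
     [0, 1, 1, 0],
     [0, 0, 1, 1]]
vn, cn = tanner_neighbours(H)
# len(vn[n]) is the degree of VN n (column weight);
# len(cn[m]) is the degree of CN m (row weight).
```

The degree of each node then falls out directly as the length of its neighbour list, mirroring the column-weight and row-weight statements above.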
As noted above, each CN defines an even parity check constraint, in that it forces the sum of the bits (variable nodes) to which it is connected to an even value. Let us consider whether a given bit sequence d (i.e. a sequence of 1s and 0s) can be considered to be a valid codeword. First we need to write the 1s and 0s into the VNs. Then we need to check that each CN is connected to an even number of VNs containing the value 1. If this condition is satisfied for all of the CNs, then the bit sequence we are considering qualifies as a valid codeword for this particular LDPC code. An equivalent representation of this process is to post-multiply (modulo 2) the parity check matrix H by the bit sequence d (a column vector). If the result (the ‘syndrome’) is all zeros:

Hd = [0 0 0 . . . 0]^T  (1)
then the bit sequence is a valid codeword for the LDPC code defined by H. The ‘codebook’ of H is defined as the set of bit sequences which satisfy equation (1).
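The codeword test of equation (1) can be sketched in a few lines. This is a minimal illustration using an arbitrary small parity check matrix (not the H of FIG. 1):

```python
# Syndrome: H·d computed modulo 2. d is a valid codeword iff every entry is 0,
# i.e. every CN sees an even number of 1-valued neighbour VNs.
def syndrome(H, d):
    return [sum(h * b for h, b in zip(row, d)) % 2 for row in H]

def is_codeword(H, d):
    return all(s == 0 for s in syndrome(H, d))

# Hypothetical small H (each row is one even-parity constraint):
H = [[1, 1, 0, 0],
     [0, 1, 1, 0],
     [0, 0, 1, 1]]
# [1, 1, 1, 1] satisfies all three constraints; flipping one bit violates
# every constraint that bit participates in.
```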
The example LDPC code described above with reference to FIGS. 1 and 2 utilizes 8 VNs and 6 CNs, which is appropriate for the case of an encoded blocklength of N=8 bits, and M=6 constraints. While this is sufficient for illustration purposes, practical implementations will normally be designed for very much larger encoded block sizes. For example, encoded blocks of N=20,000 bits or larger may be used, implying a Tanner graph having an equivalent number (e.g. N=20,000) of variable nodes (VNs). For a useful LDPC code of rate 0.75, an encoded block of N=20,000 bits would imply a requirement for 5000 CNs.
As is well known in the art, LDPC decoding can be implemented in software, hardware, or a combination of the two. For very high speed systems (for example, in a FEC decoder for processing channel signals having a line rate of 40 Gbps or faster), hardware implementations are normally preferred.
As is also known in the art, for encoded block sizes large enough to provide reasonable performance, all of the effective decoding strategies for low-density parity-check codes are message-passing algorithms. The best algorithm known in the art is the sum-product algorithm, also known as iterative probabilistic decoding or belief propagation. A brief description of the Belief Propagation (BP) algorithm is provided below. This algorithm may sometimes be referred to as the “Message-Passing Algorithm” (MPA) or the “Sum-Product Algorithm” (SPA). We will prefer the term “Belief Propagation” in the present application, but may in some places use the various terms interchangeably.
The structure of the BP algorithm is tightly linked to the structure of the code's Tanner graph. Each VN and CN operates to compute and pass messages to its immediate neighbour nodes, in lockstep, along the edges of the graph. A message cycle from each VN to its neighbour CNs, and then from each CN to its neighbour VNs, is considered to constitute a single “iteration” of the belief propagation algorithm. The messages calculated by any given VN represent what that VN “believes” is the likelihood that its bit value within the decoded block has a logical value of “0”, based on the Log-Likelihood Ratio (LLR) information sample for that bit position obtained from the received signal, and the messages received from its neighbour CNs during the previous iteration. Mathematically, this may be represented as:

V_i = Vn − Cm_i, i = 1 . . . j
where: V_i is the message output to the i-th CN; Cm_i is the message received from the i-th CN; and
Vn = Σ_{h=1}^{j} Cm_h + LLR(x),

where LLR(x) is the LLR sample value for that VN's bit position obtained from the received signal.
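A minimal sketch of this VN update, assuming the notation above (the function name and example values are illustrative only):

```python
# VN update: Vn = sum of all incoming CN messages plus the channel LLR for
# this bit position; the message to CN i then excludes CN i's own
# contribution: V_i = Vn - Cm_i.
def vn_messages(llr, cm):
    vn = llr + sum(cm)
    return [vn - cm_i for cm_i in cm]

# Example: LLR sample 0.5 and messages from j = 3 neighbour CNs.
v = vn_messages(0.5, [1.0, -0.25, 2.0])
# v[0] depends only on the LLR and the messages from CNs 1 and 2.
```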
The message calculated by any given CN, and sent to a given neighbour VN represents what that CN “believes” is the likelihood that the neighbour VN's bit value within the decoded block has a logical value of “0”; based on the most recent messages received from the other VNs to which that CN is connected. Mathematically, this may be represented as:
Cm_i = Sign(Vm_i) · Π_{h=1}^{k} Sign(Vm_h) · θ[ Σ_{h=1}^{k} θ(Vm_h) − θ(Vm_i) ], i = 1 . . . k
where Cm_i is the message sent to the i-th VN; Vm_i is the message received from the i-th VN; and Vm_h, h = 1 . . . k, are the messages received from all of the k neighbour VNs to which the CN is connected.
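A sketch of this CN update follows. Note that the text does not define θ; the code below assumes the common choice θ(x) = −ln tanh(x/2), applied to message magnitudes, which is its own inverse. That definition, and the function names, are assumptions for illustration rather than the source's own:

```python
import math

def theta(x):
    # Assumed definition: theta(x) = -ln(tanh(x / 2)) for x > 0 (self-inverse).
    return -math.log(math.tanh(x / 2.0))

def sign(x):
    return -1.0 if x < 0 else 1.0

# CN update: Cm_i = Sign(Vm_i) * [product of all input signs]
#                   * theta(sum of all thetas - theta(Vm_i)).
# Multiplying by Sign(Vm_i) twice cancels it, so each outgoing message
# effectively uses only the *other* k - 1 incoming messages.
def cn_messages(vm):
    sign_all = 1.0
    theta_sum = 0.0
    for v in vm:
        sign_all *= sign(v)
        theta_sum += theta(abs(v))
    return [sign(v) * sign_all * theta(theta_sum - theta(abs(v))) for v in vm]

c = cn_messages([1.0, -0.5, 2.0, 0.75])
# The message to VN 1 is positive (its own negative sign is excluded);
# the remaining messages inherit the minus sign of Vm_1.
```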
With each successive iteration, the confidence level in the logical value taken by each VN will tend to increase. Normally, the BP algorithm will iterate until a predetermined criterion is satisfied. Typical criteria may include a maximum permitted number of iterations; or determining that each CN is connected to an even number of VNs containing the value 1, as described above.
As may be seen in the above equations, the message sent to each node explicitly excludes the effects of the message received from that node. Thus, in the example of FIGS. 1 and 2 a VN is connected to three neighbour CNs, and so will compute a respective message for a given neighbour CN which takes into account the VN's LLR sample and only the messages received from the other two of its neighbour CNs. The messages sent by a CN to each of its neighbour VNs are restricted in the same manner. Thus, in the example of FIGS. 1 and 2 a CN is connected to four neighbour VNs, and so will compute a respective message for a given neighbour VN which takes into account only the messages received from the other three of its neighbour VNs. This avoids a problem of the probability computed by a VN during each iteration being distorted by its own probability calculation from the previous iteration.
Following the above description it will be appreciated that, in general, every node computes and sends a uniquely different message to each of its neighbours during each iteration. For software implementations of the BP algorithm, this results in a requirement for the computation and buffering of a very large number ([N*j]+[M*k]) of messages during each iteration, which is time consuming, and thus limits the maximum line rate of a signal that can be successfully decoded using this technique. Hardware implementations may avoid this problem by allowing each node to be implemented by a respective computation block, to thereby exploit the speed of massively parallel processing for calculation of messages. However, in this scenario, the edges of the Tanner graph must also be implemented using physical connections between processing blocks. In this respect, it will be recalled that high speed physical wire connections inside an integrated circuit are generally implemented to be unidirectional. In a Complementary Metal Oxide Semiconductor (CMOS) integrated circuit, bidirectional connections generally suffer unduly from excessive capacitance, increased heat dissipation, and multiplexing delays. As such, physically implementing bidirectional connections between nodes requires two nominally parallel physical connections, one for carrying messages in each direction. Consequently, a hardware implementation of the LDPC code will require [N*j]+[M*k] discrete wire connections between nodes. These issues create a problem in that, for encoded block sizes large enough to provide reasonable performance, it is extremely difficult to achieve a practical solution for routing the physical connections between the code blocks.
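The connection counts above can be made concrete with a short calculation. The figures below are illustrative: the small case uses the FIG. 1 parameters, while the large case assumes N = 20,000 with j = 3 and the M = 5,000 CNs of the earlier rate-0.75 example, which together imply an average k = 12 (since N·j and M·k must both equal the number of Tanner-graph edges):

```python
# Wires (equivalently, messages per iteration): one unidirectional connection
# per directed edge, i.e. [N*j] + [M*k]. Note N*j == M*k, since each side
# counts every Tanner-graph edge exactly once.
def wire_count(N, j, M, k):
    assert N * j == M * k, "VN-side and CN-side edge counts must agree"
    return N * j + M * k

small = wire_count(8, 3, 6, 4)          # the FIG. 1 example
large = wire_count(20000, 3, 5000, 12)  # assumed large-block example
```

For the large-block case this already amounts to 120,000 discrete unidirectional wires, before accounting for routing area or wire length.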
In that respect, it may be noted that the arrangement illustrated in FIG. 2 is, at least in theory, physically realizable in an integrated circuit. However, it will be seen that as the number of nodes increases, the number of physical connections must also increase, as will the amount of area of the integrated circuit devoted to those connections. It will also be seen that the average length of connections between nodes will be approximately one-half the dimension of the integrated circuit. For an LDPC code having enough VNs and CNs to provide reasonable performance, and implemented on an IC that is small enough to provide a reasonable yield (e.g. approximately 4 cm2 or less), this can result in several kilometres of wire connections within the integrated circuit. The heat generated within such long lengths of wire connections can pose a still further barrier to successful implementation of a practical LDPC integrated circuit.
Techniques enabling implementation of LDPC codes in high speed signal processing systems remain highly desirable.