In recent years, error correction technology has been widely used in wireless, cable, and recording systems. The combination of low-density parity-check (LDPC) codes and their decoding method, the sum-product algorithm (hereinafter SPA), has very good decoding characteristics and is expected to provide an excellent error correction code for the next generation. At the sending side, an encoder generates a check matrix H, which is described later, and derives a generator matrix G (a k×n matrix; k: information length, n: code word length) satisfying GH^T = 0, where T denotes transposition. The encoder then receives a message (m1, m2, . . . , mk) of information length k, generates a code word (c1, c2, . . . , cn) = (m1, m2, . . . , mk)G using the generator matrix G (where (c1, c2, . . . , cn)H^T = 0), and modulates and sends the generated code word. A decoder receives the modulated signal via a channel and demodulates it. The decoder then subjects the demodulated result to iterative decoding by SPA and provides an estimated result (corresponding to the original (m1, m2, . . . , mk)). (Refer to Patent Document 1.) The gist of the LDPC code and SPA decoding will now be described.
An LDPC code is a linear code defined by a sparse check matrix. The check matrix of an LDPC code can be expressed by a bipartite graph called the Tanner graph. Assume that a check matrix H, which is an M×N matrix, is given. The nodes that constitute the Tanner graph consist of N bit nodes and M check nodes. The bit nodes and the check nodes correspond to the columns and rows of the check matrix H respectively, and when the element in row i and column j of the check matrix is 1, the jth bit node and the ith check node are connected.
For instance, when the check matrix H is as given by equation (1), the Tanner graph is as shown in FIG. 5.
$$H=\begin{bmatrix}1&1&1&1&0&0&0&0&0&0\\1&0&0&0&1&1&1&0&0&0\\1&0&0&0&0&0&0&1&1&1\end{bmatrix}\qquad(1)$$
Each bit node represents a received bit (each symbol of the code word), and each check node represents a parity check constraint (condition) among the bit nodes (symbols) to which it is connected. For the aforementioned check matrix H and a code word of length 10 (m1, . . . , m10), check node 1 corresponds to the parity check condition m1+m2+m3+m4=0, check node 2 to m1+m5+m6+m7=0, and check node 3 to m1+m8+m9+m10=0.
In SPA, decoding is performed by sending and receiving messages on the Tanner graph of the check matrix that defines the LDPC code. A round of message passing between the connected bit nodes and check nodes is called one iteration.
In order to obtain good decoding characteristics, multiple iterations are needed. Of the messages passed between the nodes, the message Qnm from a bit node n to a check node m is given by equation (2), and the message Rmn from the check node m to the bit node n is given by equation (3). Note that an estimate for each received bit is provided according to the sign of the computed message Qnm at the bit node after multiple iterations (0 or 1 depending on whether the sign is positive or negative).
$$Q_{nm}=\ln\!\left[\frac{p_n(1)}{p_n(0)}\right]+\left(\sum_{m'\in\mu(n)}R_{m'n}\right)-R_{mn}\qquad(2)$$
Note that μ(n) of m′∈μ(n) in (ΣRm′n), the summing operation of Rm′n over m′ in equation (2), represents the set of check nodes adjacent to the bit node n. In other words, it is the set of row numbers whose entry in the nth column of the check matrix H is 1; in the case of the check matrix H of equation (1), μ(1)={1,2,3} and μ(2)={1}.
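As a small illustration (a sketch added here, not part of the original disclosure; the helper name mu is an assumption), the sets μ(n) can be read directly off the check matrix H of equation (1). Zero-based indices are used, so μ(1)={1,2,3} of the text appears as mu(H, 0) == {0, 1, 2}:

```python
# Check matrix H of equation (1), written out as a list of rows.
H = [
    [1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 1, 1, 1, 0, 0, 0],
    [1, 0, 0, 0, 0, 0, 0, 1, 1, 1],
]

def mu(H, n):
    """Set of check nodes (rows) adjacent to bit node n (0-based)."""
    return {m for m, row in enumerate(H) if row[n] == 1}

print(mu(H, 0))  # {0, 1, 2} -- i.e. mu(1) = {1, 2, 3} in 1-based notation
print(mu(H, 1))  # {0}       -- i.e. mu(2) = {1}
```

The same construction with rows and columns swapped yields the sets ν(m) used in equation (3).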
Further, in equation (2), ln[pn(1)/pn(0)] is the input LLR (Log-Likelihood Ratio). Note that ln[pn(1)/pn(0)] represents the same quantity as ln[P(yn|xn=0)/P(yn|xn=1)], which is described later.
$$R_{mn}=\Phi^{-1}\!\left(\sum_{n'\in\nu(m)}\Phi\!\left(\left|Q_{n'm}\right|\right)-\Phi\!\left(\left|Q_{nm}\right|\right)\right)\times\left(\operatorname{sign}(Q_{nm})\times\prod_{n'\in\nu(m)}\operatorname{sign}(Q_{n'm})\right)\qquad(3)$$
where:
$$\Phi(x)=-\log\!\left(\tanh\!\left(\tfrac{1}{2}x\right)\right)\qquad(4)$$
ν(m) of n′∈ν(m), which appears both in the summing operation over n′ and in the product operation Π sign(Qn′m) in equation (3), represents the set of bit nodes adjacent (connected) to the check node m. In other words, it is the set of column numbers whose entry in the mth row of the check matrix H is 1; in the case of the check matrix H of equation (1), ν(1)={1,2,3,4}.
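The check-node computation of equations (3) and (4) can be sketched as follows (an illustration added here, not from the original disclosure; the function names are assumptions). Φ is applied to message magnitudes, the signs are carried by the separate product term, and Φ serves as its own inverse for positive arguments:

```python
import math

def phi(x):
    """Phi(x) = -log(tanh(x/2)) of equation (4); its own inverse for x > 0."""
    return -math.log(math.tanh(0.5 * x))

def check_node_message(Q, m, n, nu_m):
    """R_mn of equation (3): magnitudes go through phi, signs through the
    separate product term. Q[(n', m)] holds the bit-to-check messages."""
    others = [Q[(np, m)] for np in nu_m if np != n]
    s = sum(phi(abs(q)) for q in others)
    sign = math.prod(-1.0 if q < 0 else 1.0 for q in others)
    return sign * phi(s)        # phi acts as Phi^{-1} here

# Toy usage: check node 0 connected to bit nodes {0, 1, 2}
Q = {(0, 0): 1.2, (1, 0): -0.8, (2, 0): 2.5}
r = check_node_message(Q, 0, 0, {0, 1, 2})
```

One useful property to note: |R_mn| never exceeds the smallest |Q_n′m| among the other connected bit nodes, since Φ is decreasing.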
When an LDPC decoder is realized, the check nodes are conventionally divided into a plurality of groups and message computations are pipeline-processed. This group division is called “clustering” and the order of computation is called “scheduling.”
FIG. 6 is a drawing showing how messages are passed when the LDPC code defined by equation (1) is decoded with cluster size 1 (one check node per cluster), i.e., how messages are passed between the bit nodes and the check nodes of the Tanner graph shown in FIG. 5. In FIG. 6, the bit nodes are indicated by circled numbers (each bit node number), the check nodes by boxed numbers (each check node number), and the arrows between the nodes indicate the passing of messages, as in FIG. 5.
Messages Q11, Q21, Q31, and Q41 are sent from bit nodes 1, 2, 3, and 4 to the check node 1, and the check node 1 sends messages R11, R12, R13, and R14 to the bit nodes 1, 2, 3, and 4 respectively. Next, messages Q12, Q52, Q62, and Q72 are sent from the bit nodes 1, 5, 6, and 7 to the check node 2, and the check node 2 sends messages R21, R25, R26, and R27 to the bit nodes 1, 5, 6, and 7 respectively. Then, messages Q13, Q83, Q93, and Q103 are sent from the bit nodes 1, 8, 9, and 10 to the check node 3, and the check node 3 sends messages R31, R38, R39, and R310 to the bit nodes 1, 8, 9, and 10 respectively. The sequence of message passing described above constitutes one iteration.
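The cluster-size-1 schedule above can be sketched as one function (an illustrative sketch under the stated equations, not the original implementation; names and the toy LLR values are assumptions). Each check node is served in turn: the adjacent bit nodes first form their Qnm messages per equation (2), then the check node returns Rmn messages per equation (3):

```python
import math

def phi(x):
    """Phi(x) = -log(tanh(x/2)) of equation (4); its own inverse for x > 0."""
    return -math.log(math.tanh(0.5 * x))

def one_iteration(H, llr, R):
    """One SPA iteration with cluster size 1: check nodes are served one
    after another, as in FIG. 6. R maps (m, n) to the latest R_mn message.
    Assumes all messages stay nonzero, as they do for nonzero input LLRs."""
    M, N = len(H), len(H[0])
    mu = [[m for m in range(M) if H[m][n]] for n in range(N)]
    for m in range(M):                       # one cluster = one check node
        nu_m = [n for n in range(N) if H[m][n]]
        # Bit-to-check messages Q_nm, equation (2)
        Q = {n: llr[n] + sum(R[(mp, n)] for mp in mu[n]) - R[(m, n)]
             for n in nu_m}
        # Check-to-bit messages R_mn, equation (3)
        for n in nu_m:
            others = [Q[np] for np in nu_m if np != n]
            s = sum(phi(abs(q)) for q in others)
            sign = math.prod(-1.0 if q < 0 else 1.0 for q in others)
            R[(m, n)] = sign * phi(s)
    return R

# One iteration on the code of equation (1), with made-up input LLRs
H = [[1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
     [1, 0, 0, 0, 1, 1, 1, 0, 0, 0],
     [1, 0, 0, 0, 0, 0, 0, 1, 1, 1]]
llr = [2.0, -1.0, 1.5, 0.5, 1.0, 1.0, -2.0, 0.5, 1.5, 1.0]
R = {(m, n): 0.0 for m in range(3) for n in range(10)}
R = one_iteration(H, llr, R)
```

After a fixed number of iterations, the estimate for bit n would be taken from the sign of llr[n] plus the sum of all R[(m, n)] over m ∈ μ(n).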
The Tanner graph in FIG. 5 does not include any loop. Here, a loop means a circulating path that starts from a node and returns to it.
When a Tanner graph does not include any loop, SPA can compute the exact posterior probability.
On the other hand, the Tanner graph defined by the check matrix H of equation (5) includes a loop of length 4, as shown in FIG. 7. In FIG. 7, the arrows indicate the direction of each message passed between the nodes.
$$H=\begin{bmatrix}1&1&1&0&0&0\\0&1&1&1&0&0\\0&0&0&1&1&1\end{bmatrix}\qquad(5)$$
In other words, as shown in FIG. 7, the path of the loop of length 4 runs from check node 1 to bit node 3, from bit node 3 to check node 2, from check node 2 to bit node 2, and from bit node 2 back to check node 1.
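A loop of length 4 exists exactly when two check nodes (two rows of H) share a 1 in two or more columns. A small sketch (added here for illustration, not part of the original disclosure) confirms this for the matrices of equations (1) and (5):

```python
from itertools import combinations

def has_length4_loop(H):
    """True if any pair of check nodes (rows) shares >= 2 bit nodes,
    which is exactly the condition for a loop of length 4."""
    for r1, r2 in combinations(H, 2):
        shared = sum(1 for a, b in zip(r1, r2) if a == 1 and b == 1)
        if shared >= 2:
            return True
    return False

H5 = [[1, 1, 1, 0, 0, 0],
      [0, 1, 1, 1, 0, 0],
      [0, 0, 0, 1, 1, 1]]          # equation (5): rows 1 and 2 share bit nodes 2 and 3
H1 = [[1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
      [1, 0, 0, 0, 1, 1, 1, 0, 0, 0],
      [1, 0, 0, 0, 0, 0, 0, 1, 1, 1]]  # equation (1): rows share only bit node 1

print(has_length4_loop(H5))  # True
print(has_length4_loop(H1))  # False
```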
When a message goes around such a loop, the decoder cannot compute the exact posterior probability, resulting in deteriorated decoding characteristics. It is known that the shorter the loops are, the worse the decoding characteristics become (Non-Patent Document 1).
When an LDPC decoder is realized, a majority of the chip area is occupied by the registers or memory for holding messages and by the interconnect paths for sending/receiving messages.
Accordingly, a method for reducing the number of messages by approximating equation (2) with equation (6) has been proposed (Non-Patent Document 2).
$$Q'_n(k)=Q'_n(k-1)+\sum_{m'\in\{S(k)\cap\mu(n)\}}R_{m'n}\qquad(6)$$
In equation (6), Rmn refers to the messages from the check node m to the bit node n and is given by equation (3).
Further, S(k) of m′∈{S(k)∩μ(n)} in (ΣRm′n), the summing operation of Rm′n over m′, is the set of check nodes included in the cluster being computed at a time k, μ(n) is the set of check nodes adjacent to the bit node n, and ∩ represents set intersection. Therefore, in the summing operation of Rm′n over m′, the messages Rm′n from the check nodes m′ included in both S(k) and μ(n) are summed, Q′n(k−1) from the previous time k−1 is added to the sum, and the result of this addition is Q′n(k), the message at the time k. The bit node n passes the same message Q′n(k) to all the check nodes connected to it.
In the message computation process at the bit nodes, the messages Rmn from the check node m to the bit node n are computed for each cluster, and the computation results are added to Q′n.
The initial value Q′n(0) of Q′n(k) is the input (channel) LLR. LLR stands for Log-Likelihood Ratio, ln[P(yn|xn=0)/P(yn|xn=1)]. Note that yi is the received symbol, xi is the transmitted symbol, ni is additive white Gaussian noise on the channel (yi = xi + ni), and binary-bipolar conversion (0→+1, 1→−1) is executed.
As a result of approximating equation (2) by equation (6), one bit node sends the same message to all the adjacent check nodes (all the check nodes connected to the bit node). Therefore, resources such as registers for holding messages and interconnect paths can be greatly reduced.
Further, as a result of this approximation, the amount of message computation can be reduced by 50 percent or more compared to equation (2); in other words, the computation speed and processing performance are improved.
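The approximated bit-node update of equation (6) can be sketched as follows (an illustration added here, not from the original disclosure; names and the toy values are assumptions). Every check node adjacent to bit node n receives the same Q′n(k), which is what permits the savings in registers and interconnect:

```python
def approx_bit_update(Qp, R, S_k, mu):
    """Equation (6): Q'_n(k) = Q'_n(k-1) plus the sum of R_{m'n} over the
    check nodes m' in both the current cluster S(k) and mu(n).
    Qp[n] holds Q'_n(k-1); R[(m, n)] the latest check-to-bit messages."""
    return [q + sum(R[(m, n)] for m in S_k & mu[n])
            for n, q in enumerate(Qp)]

# Toy usage with the H of equation (1): cluster = {check node 0} at time k
mu = [{0, 1, 2}, {0}, {0}, {0}, {1}, {1}, {1}, {2}, {2}, {2}]
Qp = [0.5] * 10                       # Q'_n(k-1), e.g. initial channel LLRs
R = {(m, n): 0.25 for m in range(3) for n in range(10)}
Qk = approx_bit_update(Qp, R, {0}, mu)
print(Qk[0], Qk[4])  # 0.75 0.5 -- only bit nodes adjacent to check node 0 change
```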
[Patent Document 1] Japanese Patent Kokai Publication No. JP-P2003-244109A.
[Non-Patent Document 1] D. J. C. MacKay, "Good error-correcting codes based on very sparse matrices," IEEE Transactions on Information Theory, vol. 45, pp. 399-431, 1999.
[Non-Patent Document 2] E. Yeo, P. Pakzad, B. Nikolic, and V. Anantharam, "High throughput low-density parity-check decoder architectures," Global Telecommunications Conference 2001, vol. 5, 25-29 Nov. 2001, pp. 3019-3024.