1. Field of the Invention
The present invention relates to a decoding device, a decoding method, a receiving device, and a storage medium reproducing device applied, for example, to a circuit implementing error correcting code technology using an algebraic method, and to a program storage medium.
2. Description of the Related Art
A decoding method is known which utilizes algebraic properties of algebraic geometry codes, for example a Reed-Solomon code and a BCH (Bose-Chaudhuri-Hocquenghem) code, which is a subfield subcode of the Reed-Solomon code, and which is excellent in terms of both performance and calculation cost.
For example, supposing that a Reed-Solomon code having a code length n, an information length k, a defining field GF(q) (q = p^m, p: prime number), and a minimum distance d = n − k + 1 is denoted RS(n, k), minimum distance decoding (normal decoding), which decodes a hard decision received word into the codeword at minimum Hamming distance, is well known as a method ensuring correction of t error symbols satisfying t < d/2.
In addition, Guruswami-Sudan list decoding (hereinafter referred to as G-S list decoding) ensures correction of t error symbols satisfying t < n − √(nk) (see V. Guruswami, M. Sudan, “Improved Decoding of Reed-Solomon and Algebraic-Geometry Codes,” IEEE Transactions on Information Theory, Vol. 45, pp. 1757-1767, 1999).
It is known that Koetter-Vardy list decoding (hereinafter referred to as K-V list decoding), which extends the Guruswami-Sudan list decoding to use a soft decision received word, includes, as does the Guruswami-Sudan list decoding, four procedures: (1) calculating the reliability of each symbol from received information; (2) extracting two-variable polynomial interpolation conditions from the reliability; (3) interpolating a two-variable polynomial; and (4) creating a decoded word list by factorization of the interpolation polynomial. It is known that K-V list decoding has a higher performance than hard decision decoding (see R. Koetter, A. Vardy, “Algebraic Soft-Decision Decoding of Reed-Solomon Codes,” IEEE Transactions on Information Theory, 2001).
It is also known that re-encoding can reduce calculation cost thereof to a practical range (see R. Koetter, J. Ma, A. Vardy, A. Ahmed, “Efficient Interpolation and Factorization in Algebraic Soft-Decision Decoding of Reed-Solomon Codes,” Proceedings of ISIT 2003).
On the other hand, a low density parity check code (LDPC code) that enables a high performance close to a limit performance to be obtained by iterative decoding using belief propagation (BP) has recently been drawing attention as a linear code (see D. MacKay, “Good Error-Correcting Codes Based on Very Sparse Matrices,” IEEE Transactions on Information Theory, 1999).
It is theoretically known that the belief propagation (BP) used for the LDPC code is generally effective only for linear codes having a low density parity check matrix. It is also known that lowering the density of the parity check matrix of a Reed-Solomon code or a BCH code is NP-hard (Nondeterministic Polynomial-time hard) (see Berlekamp, R. McEliece, H. van Tilborg, “On the Inherent Intractability of Certain Coding Problems,” IEEE Transactions on Information Theory, vol. 24, pp. 384-386, May 1978).
Thus, it has been considered to be difficult to apply belief propagation to Reed-Solomon codes or BCH codes.
In 2004, however, Narayanan et al. introduced an effective application of belief propagation (BP) to Reed-Solomon codes, BCH codes, and other linear codes having a parity check matrix whose density is not low, using a parity check matrix diagonalized according to the reliability of a received word (see Jing Jiang, K. R. Narayanan, “Soft Decision Decoding of RS Codes Using Adaptive Parity Check Matrices,” Proceedings of IEEE International Symposium on Information Theory 2004).
This method is referred to as adaptive belief propagation (ABP) decoding. This ABP decoding method will be described in the following.
For example, consider a linear code C that has a code length n = 6, an information length k = 3, and a code rate r = 1/2, and that has the following 3×6 matrix H as its parity check matrix.
  H = ( 1  0  1  0  0  1
        1  1  0  1  0  1
        0  1  1  1  1  1 )
A code space C is expressed as follows:

C = {c = (c1, c2, . . . , c6), c1, c2, . . . , c6 ∈ {0, 1} | H·c^T = 0}
Suppose that a certain codeword passes through a certain channel, for example a BPSK (Binary Phase Shift Keying) modulation + AWGN (Additive White Gaussian Noise) channel, and is thereafter received as the following received word by a receiver:

r = (r1, r2, . . . , r6) = (0.4, 1.2, 0.1, 0.7, 0.3, 0.7)
At this time, the magnitude of the absolute value of each received value indicates the level of reliability of the corresponding symbol. That is, numbers are assigned to the received values in increasing order of reliability as follows:

    <3>  <6>  <1>  <4>  <2>  <4>
r = (0.4, 1.2, 0.1, 0.7, 0.3, 0.7)
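As an illustrative sketch (not taken from the cited references; all names are hypothetical), the reliability ranking above, in which tied magnitudes share a rank, may be computed as follows:

```python
# Illustrative sketch: rank received values by reliability, i.e. by
# increasing absolute value; equal magnitudes share the same rank.
r = [0.4, 1.2, 0.1, 0.7, 0.3, 0.7]

order = sorted(range(len(r)), key=lambda i: abs(r[i]))
rank = [0] * len(r)
prev_mag = None
current = 0
for pos, i in enumerate(order):
    if abs(r[i]) != prev_mag:
        current = pos + 1       # standard competition ranking for ties
        prev_mag = abs(r[i])
    rank[i] = current

print(rank)  # [3, 6, 1, 4, 2, 4], matching the numbering in the text
```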
Next, the diagonalization of the parity check matrix H is performed in order, starting with the column corresponding to the symbol whose reliability is lowest. In this example, the columns corresponding to the symbols in increasing order of reliability are the third column, the fifth column, the first column, the fourth or sixth column, and the second column, in that order. The diagonalization of the parity check matrix H is performed in this order of priority.
<1> Basic Transformation with Third Column as Key

  H = ( 1  0 [1] 0  0  1
        1  1  0  1  0  1
        1  1  0  1  1  0 )

<2> Basic Transformation with Fifth Column as Key

  H = ( 1  0 [1] 0  0  1
        1  1  0  1 [1] 0
        1  1  0  1  0  1 )

<3> Basic Transformation with First Column as Key

  Hnew = (  0  1 [1] 1  0  0
            0  0  0  0 [1] 1
           [1] 1  0  1  0  1 )

Here the bracketed entries indicate the diagonalized (pivot) positions.
When a column on which diagonalization is attempted is linearly dependent on the columns that have already been diagonalized, the column is left as it is, and diagonalization is attempted on the next column in the order.
A parity check matrix Hnew, obtained as a result of thus performing diagonalization a number of times equal to the rank of the matrix H, is used to update reliability by belief propagation (BP).
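A minimal sketch of the reliability-ordered diagonalization, assuming binary (GF(2)) arithmetic; the function name and row-selection details are illustrative and not taken from the references:

```python
def diagonalize(H, priority):
    """Diagonalize the parity check matrix H (a list of 0/1 row lists),
    trying columns in the given priority order; a column linearly
    dependent on already-diagonalized columns is skipped."""
    H = [row[:] for row in H]
    used_rows = set()
    for col in priority:
        # Find an unused row with a 1 in this column to serve as pivot.
        pivot = next((r for r in range(len(H))
                      if r not in used_rows and H[r][col] == 1), None)
        if pivot is None:
            continue            # linearly dependent column: leave as is
        used_rows.add(pivot)
        # Clear the remaining 1s in this column by row XOR.
        for r in range(len(H)):
            if r != pivot and H[r][col] == 1:
                H[r] = [a ^ b for a, b in zip(H[r], H[pivot])]
        if len(used_rows) == len(H):    # rank(H) pivots placed: done
            break
    return H

H = [[1, 0, 1, 0, 0, 1],
     [1, 1, 0, 1, 0, 1],
     [0, 1, 1, 1, 1, 1]]
# Priority from the example: third, fifth, first, fourth, sixth,
# second column (0-based indices).
Hnew = diagonalize(H, [2, 4, 0, 3, 5, 1])
```

Up to a reordering of the rows, this Hnew consists of the same rows as the diagonalized matrix in the example above.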
FIG. 1 is a Tanner graph corresponding to a parity check matrix Hnew.
Belief propagation is implemented by allowing a message, that is, an LLR (Log Likelihood Ratio) to come and go along an edge of the Tanner graph.
Specifically, when transmission and reception are performed with BPSK modulation over an additive Gaussian noise channel, the LLR Ri of each bit is given by the following equation:

Ri = (2/σ)·ri
where σ is the variance of Gaussian noise, and ri denotes a received value.
A node corresponding to each column of the matrix is referred to as a variable node, and a node corresponding to each row of the matrix is referred to as a check node.
Letting Qi,j be a message from an ith variable node to a jth check node, and Ri,j be a message from the jth check node to the ith variable node, and letting J(i) be an index set of check nodes connected to the ith variable node, and I(j) be an index set of variable nodes connected to the jth check node, respective updating equations of the message Ri,j and the message Qi,j are as follows:
  Ri,j = ( ∏_{l ∈ I(j)\i} sign(Ql,j) ) · f( Σ_{l ∈ I(j)\i} f(|Ql,j|) )

  Qi,j = Ri + θ · Σ_{l ∈ J(i)\j} Ri,l
where θ is a coefficient referred to as a vertical step damping factor, and satisfies the condition 0 < θ ≦ 1. Further, f(x) = ln{(exp(x)+1)/(exp(x)−1)}.
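The two updating equations can be sketched as follows; the data layout (messages kept in plain dictionaries) and the value of θ are assumptions made for the example:

```python
import math

def f(x):
    # f(x) = ln((exp(x) + 1) / (exp(x) - 1)); note f is its own inverse.
    return math.log((math.exp(x) + 1.0) / (math.exp(x) - 1.0))

def check_node_update(Q, I_j, i):
    """R(i,j) from the messages Q(l,j), l in I(j)\\{i}."""
    others = [l for l in I_j if l != i]
    sign = 1.0
    for l in others:
        sign *= 1.0 if Q[l] >= 0 else -1.0
    return sign * f(sum(f(abs(Q[l])) for l in others))

def variable_node_update(R_i, R_msgs, J_i, j, theta=0.8):
    """Q(i,j) = R_i + theta * sum of R(i,l) over l in J(i)\\{j}."""
    return R_i + theta * sum(R_msgs[l] for l in J_i if l != j)
```

Because f is decreasing, the check node output magnitude never exceeds the smallest incoming magnitude, which is what the min-sum approximation described later exploits.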
ri is set as the initial value of the message Qi,j, and the extrinsic information Λi^x is updated by the following equation:
  Λi^x = Σ_{l ∈ J(i)} Ri,l
Further, the LLR Λi^q of each code bit is updated by the following equation:

Λi^q = ri + α1·Λi^x
where α1 is a coefficient referred to as an adaptive belief propagation damping factor, and satisfies a condition 0<α1≦1.
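The extrinsic-information and LLR updates above may be sketched as follows (the concrete values of α1 and of the incoming check-to-variable messages are illustrative only):

```python
def update_llr(r_i, R_msgs_i, alpha1=0.5):
    """Lambda_i^x = sum of R(i,l) over l in J(i);
    Lambda_i^q = r_i + alpha1 * Lambda_i^x."""
    lam_x = sum(R_msgs_i)
    lam_q = r_i + alpha1 * lam_x
    return lam_x, lam_q

# Example with illustrative values: three incoming check messages.
lam_x, lam_q = update_llr(0.4, [0.2, -0.1, 0.3], alpha1=0.5)
```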
The updating of the LLR by the belief propagation (BP) is repeated until an iterative decoding stopping condition prepared in advance is satisfied, for example until a maximum iteration number ItH is reached.
Using the reliability of the LLR updated by the belief propagation (BP), that is, using the magnitude of the absolute value of the LLR as the reliability, the columns are diagonalized again in increasing order of reliability of the corresponding symbols, whereby iterative decoding by new belief propagation (BP) can be performed.
This will be referred to as inner iterative decoding. The updating of the LLR is repeated until an inner iterative decoding stopping condition SC1 prepared in advance is satisfied.
Further, a plurality of orders other than the order of reliability of the received values are prepared as initial orders of priority for the diagonalization of the columns of the parity check matrix. The inner iterative decoding is performed repeatedly, serially or in parallel, using the plurality of orders.
This will be referred to as outer iterative decoding. The LLR update is repeated until an outer iterative decoding stopping condition SC2 prepared in advance is satisfied.
A decoder performs decoding with the LLR updated repeatedly by the above-described ABP (Adaptive Belief Propagation) procedure as an input.
Now, when a target linear code is a Reed-Solomon code, the following are considered as the iterative decoding stopping conditions SC1 and SC2, for example.
(A) H·d^T = 0 or Iteration Number t ≧ N
(B) Success in Bounded Distance Decoding or Iteration Number t≧N
(C) Success in Koetter-Vardy Soft Decision List Decoding or Iteration Number t≧N
where d = (d1, d2, . . . , d6) is a hard decision result of Λi^q, di = {1 when Λi^q > 0, and 0 when Λi^q ≦ 0}, and N is a maximum number of iterations determined in advance.
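Stopping condition (A) amounts to hard-deciding the updated LLRs and checking all parity equations over GF(2); a sketch (the function name is illustrative):

```python
def syndrome_zero(H, lam_q):
    """Hard-decide each LLR (1 when positive, 0 otherwise) and check
    whether every parity equation of H is satisfied over GF(2)."""
    d = [1 if l > 0 else 0 for l in lam_q]
    return all(sum(h * di for h, di in zip(row, d)) % 2 == 0 for row in H)

# The parity check matrix from the example above.
H = [[1, 0, 1, 0, 0, 1],
     [1, 1, 0, 1, 0, 1],
     [0, 1, 1, 1, 1, 1]]
# All-negative LLRs hard-decide to the all-zero word, a trivial codeword.
```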
In addition, the following are considered as decoding methods.
(a) Hard Decision Decoding
(b) Bounded Distance Decoding
(c) Koetter-Vardy Soft Decision List Decoding
In addition, for belief propagation (BP), an approximation method at a time of check node message calculation referred to as a UMP (Uniformly Most Powerful) decoding method is known (see M. P. C. Fossorier, M. Mihaljevic, H. Imai, “Reduced Complexity Iterative Decoding of Low-Density Parity Check Codes Based on Belief Propagation,” IEEE Transactions on Communications, Vol. 47, No. 5, May 1999).
The calculation equation is shown below.
  Ri,j = ( ∏_{l ∈ I(j)\i} sign(Ql,j) ) · min_{l ∈ I(j)\i} |Ql,j|
The use of the UMP decoding method reduces the amount of calculation, because the complex calculation of the f-function is replaced with a minimum value search, but may result in degraded performance due to the approximation.
In addition, UMP decoding is realized by only a minimum value search for the messages at a check node and the addition of messages at a variable node. Therefore, when transmission and reception are performed with BPSK modulation on an additive Gaussian noise channel, belief propagation (BP) is possible even without the channel information σ, by using the received value ri itself as the LLR.
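A sketch of the UMP check node update, in which the f-function computation is replaced by a minimum-magnitude search (names are illustrative, not from the references):

```python
def check_node_update_ump(Q, I_j, i):
    """R(i,j) = product of the signs of Q(l,j) times the minimum |Q(l,j)|,
    taken over l in I(j)\\{i}."""
    others = [l for l in I_j if l != i]
    sign = 1.0
    for l in others:
        sign *= 1.0 if Q[l] >= 0 else -1.0
    return sign * min(abs(Q[l]) for l in others)
```

Only comparisons and sign flips are needed, which is why no channel information σ is required.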