1. Field of the Invention
This invention relates to a decoding method, a decoding apparatus, and a program, as well as to a circuit and a program storage medium, for implementing an error correction code technique which uses, for example, an algebraic method.
2. Description of the Related Art
A decoding method for an algebraic code such as, for example, a Reed-Solomon code or a BCH code (a subfield subcode of a Reed-Solomon code) is known which makes use of an algebraic property of the code and is superior in both performance and calculation cost.
For example, where a Reed-Solomon code having a code length n, an information length k, a definition field GF(q) (q=pm, p: prime number) and a minimum distance d=n−k+1 is represented by RS(n, k), it is well known that minimum distance decoding (normal decoding), which decodes a hard decision received word into the codeword at the minimum Hamming distance, assures correction of t error symbols which satisfy t<d/2.
Meanwhile, list decoding by Guruswami-Sudan (hereinafter referred to as G-S list decoding) assures correction of t error symbols which satisfy t<n−√(nk), as disclosed in V. Guruswami and M. Sudan, “Improved decoding of Reed-Solomon and algebraic-geometry codes”, IEEE Transactions on Information Theory, vol. 45, pp. 1757-1767, 1999.
List decoding (hereinafter referred to as K-V list decoding) by Koetter-Vardy which uses soft decision received words as an extended version of list decoding of Guruswami-Sudan includes four steps of (1) calculation of the reliability of each symbol from received information, (2) extraction of two-variable polynomial interpolation conditions from the reliabilities, (3) interpolation of a two-variable polynomial, and (4) production of a decoded word list by factorization of the interpolation polynomial similarly to the G-S list decoding. It is known that the K-V list decoding has a higher performance than hard decision decoding. The K-V list decoding is disclosed in R. Koetter and A. Vardy, “Algebraic soft-decision decoding of Reed-Solomon codes”, IEEE Transactions on Information Theory, 2001.
Further, it is known that re-encoding can reduce the calculation cost to a level in a realistic region as disclosed in R. Koetter, J. Ma, A. Vardy and A. Ahmed, “Efficient Interpolation and Factorization in Algebraic Soft-Decision Decoding of Reed-Solomon codes”, Proceedings of ISIT2003.
On the other hand, as a linear code, the low density parity check code (LDPC code), with which a high performance near the theoretical limit can be obtained by repeated decoding using belief propagation (BP), has attracted attention recently. The LDPC code is disclosed in D. MacKay, “Good Error-Correcting Codes Based on Very Sparse Matrices”, IEEE Transactions on Information Theory, 1999.
It is known theoretically that belief propagation (BP) as used for the LDPC code is generally effective only for linear codes having a low density (sparse) parity check matrix. Further, it is known to be NP-hard to make the parity check matrix of a Reed-Solomon code or a BCH code sparse, as disclosed in E. Berlekamp, R. McEliece and H. van Tilborg, “On the inherent intractability of certain coding problems”, IEEE Transactions on Information Theory, vol. 24, pp. 384-386, May 1978.
Therefore, it has been considered difficult to apply the belief propagation (BP) to the Reed-Solomon code or the BCH code.
However, it was shown by Narayanan et al. in 2004 that it is effective to use a parity check matrix diagonalized in accordance with the reliability of the received word and to apply belief propagation (BP) to the Reed-Solomon code, the BCH code, or any other linear code having a non-sparse parity check matrix, as disclosed in Jing Jiang and K. R. Narayanan, “Soft Decision Decoding of RS Codes Using Adaptive Parity Check Matrices”, Proceedings of IEEE International Symposium on Information Theory 2004.
This technique is called adaptive belief propagation (ABP) decoding. The ABP decoding method is described below.
For example, a linear code C having a code length n=6, an information length k=3 and a coding rate r=½ and having the following 3×6 matrix H as a parity check matrix is considered:
    H = ( 1 0 1 0 0 1
          1 1 0 1 0 1
          0 1 1 1 1 1 )    [Expression 1]
The code space C is represented in the following manner:

    C = {c = (c1, c2, . . . , c6), c1, c2, . . . , c6 ∈ {0, 1} | H·ct = 0}    [Expression 2]
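Because the example code is short, the definition of C in Expression 2 can be checked by brute force. The following Python sketch (the function name in_code is illustrative, not part of the related art) enumerates every length-6 binary vector and keeps those annihilated by H over GF(2):

```python
import itertools

# Parity check matrix H of the example code (n = 6, k = 3) from Expression 1.
H = [
    [1, 0, 1, 0, 0, 1],
    [1, 1, 0, 1, 0, 1],
    [0, 1, 1, 1, 1, 1],
]

def in_code(c):
    """True if H * c^t = 0 over GF(2), i.e. every parity check is satisfied."""
    return all(sum(h * x for h, x in zip(row, c)) % 2 == 0 for row in H)

# The code space C of Expression 2: since H has rank 3 over GF(2),
# C contains 2^(6-3) = 8 codewords.
C = [c for c in itertools.product([0, 1], repeat=6) if in_code(c)]
```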
It is assumed that a certain codeword passes through, for example, BPSK modulation + an AWGN channel (Additive White Gaussian Noise channel) and is received by a receiver as the following received word r:

    r = (r1, r2, . . . , r6) = (0.4, 1.2, 0.1, 0.7, 0.3, 0.7)    [Expression 3]
At this time, the absolute value of each received value represents the reliability of the corresponding symbol. In other words, if the symbols are numbered in ascending order of reliability, then they are represented in the following manner:
    r = (0.4<3>, 1.2<6>, 0.1<1>, 0.7<4>, 0.3<2>, 0.7<4>)    [Expression 4]
Then, diagonalization of the parity check matrix H is performed in order from the column corresponding to the symbol having the lowest reliability. In the present example, the columns corresponding to the symbols of comparatively low reliability are, in order, the third, fifth, first, fourth (or sixth), and second columns. Therefore, diagonalization of the matrix H is performed in this order of priority.
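The reliability ranking of Expression 4 and the resulting pivot priority can be obtained with a stable sort on the absolute received values. A minimal Python sketch (variable names are illustrative):

```python
# Received word from Expression 3; |r_i| serves as the reliability of symbol i.
r = [0.4, 1.2, 0.1, 0.7, 0.3, 0.7]

# Column indices (0-based) in ascending order of reliability.  Python's sort
# is stable, so the tied 0.7 symbols keep their original order: the fourth
# column is tried before the sixth.
order = sorted(range(len(r)), key=lambda i: abs(r[i]))
# order == [2, 4, 0, 3, 5, 1]: third, fifth, first, fourth, sixth, second
```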
[Expression 5]
<1> Basic transform with the third column set as a pivot

    H = ( 1 0 1 0 0 1
          1 1 0 1 0 1
          1 1 0 1 1 0 )    [Expression 6]

<2> Basic transform with the fifth column set as a pivot

    H = ( 1 0 1 0 0 1
          1 1 0 1 1 0
          1 1 0 1 0 1 )    [Expression 7]

<3> Basic transform with the first column set as a pivot

    Hnew = ( 0 1 1 1 0 0
             0 0 0 0 1 1
             1 1 0 1 0 1 )
If a column whose diagonalization is attempted is linearly dependent on the columns diagonalized before it, then that column is left as it is, and diagonalization of the column next in the order is attempted instead.
A new parity check matrix Hnew, obtained by performing diagonalization in this manner up to the rank of the matrix H, is used to update the reliabilities by belief propagation (BP).
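The diagonalization of Expressions 5 to 7 is ordinary Gauss-Jordan elimination over GF(2) in which the pivot columns are tried in the reliability order. A minimal Python sketch (the function name diagonalize is illustrative) that also implements the skipping rule for linearly dependent columns; on the example it reproduces Hnew:

```python
def diagonalize(H, order):
    """Gauss-Jordan elimination over GF(2), trying pivot columns in the
    given priority order (ascending reliability).  A column that is
    linearly dependent on already-chosen pivots is left as it is."""
    H = [row[:] for row in H]
    m = len(H)
    pivot_row = 0
    for col in order:
        if pivot_row == m:
            break  # rank of H reached
        # Find a row at or below pivot_row with a 1 in this column.
        for t in range(pivot_row, m):
            if H[t][col] == 1:
                H[pivot_row], H[t] = H[t], H[pivot_row]
                break
        else:
            continue  # dependent column: skip it, try the next in order
        # Clear this column in all other rows (mod-2 row additions).
        for t in range(m):
            if t != pivot_row and H[t][col] == 1:
                H[t] = [(a + b) % 2 for a, b in zip(H[t], H[pivot_row])]
        pivot_row += 1
    return H

H = [[1, 0, 1, 0, 0, 1],
     [1, 1, 0, 1, 0, 1],
     [0, 1, 1, 1, 1, 1]]
H_new = diagonalize(H, [2, 4, 0, 3, 5, 1])  # priority: 3rd, 5th, 1st, ...
```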
FIG. 1 is a Tanner graph corresponding to the parity check matrix Hnew.
Belief propagation (BP) is implemented by causing messages to pass back and forth along the edges of the Tanner graph.
A node corresponding to each column of the matrix is called a variable node 1, and a node corresponding to each row is called a check node 2.
Where a message from the ith variable node to the jth check node is represented by Qi,j, a message from the jth check node to the ith variable node is represented by Ri,j, the index set of check nodes connected to the ith variable node is represented by J(i), and the index set of variable nodes connected to the jth check node is represented by I(j), the updating expressions are as given below:
    Ri,j = 2 tanh−1( Π i′∈I(j)\{i} tanh(Qi′,j/2) )

    Qi,j = ri + θ Σ j′∈J(i)\{j} Ri,j′    [Expression 8]

where θ is a coefficient called the vertical step damping factor and satisfies the condition 0<θ≦1. The initial value of Qi,j is set to ri, and updating of the extrinsic information Λix is performed in accordance with the following expression:
    Λix = Σ j∈J(i) Ri,j    [Expression 9]
Further, updating of the LLR Λiq of each code bit is performed in accordance with the following expression:

    Λiq = ri + αiΛix    [Expression 10]

where αi is a coefficient called the adaptive belief propagation damping factor and satisfies the condition 0<αi≦1.
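One round of the updates in Expressions 8 to 10 can be sketched as follows in Python (function and variable names are illustrative; the clipping of the tanh product is a numerical guard, not part of the expressions). Q maps each Tanner-graph edge (i, j) with H[j][i] = 1 to the current variable-to-check message:

```python
import math

def bp_iteration(H, r, Q, theta=1.0, alpha=1.0):
    """One belief propagation round following Expressions 8-10.
    Q[(i, j)] is the message from variable node i to check node j."""
    m, n = len(H), len(H[0])
    J = [[j for j in range(m) if H[j][i]] for i in range(n)]  # checks of var i
    I = [[i for i in range(n) if H[j][i]] for j in range(m)]  # vars of check j
    # Check-to-variable messages R_{i,j} (Expression 8, first update).
    R = {}
    for j in range(m):
        for i in I[j]:
            prod = 1.0
            for i2 in I[j]:
                if i2 != i:
                    prod *= math.tanh(Q[(i2, j)] / 2.0)
            prod = max(min(prod, 0.999999), -0.999999)  # keep atanh finite
            R[(i, j)] = 2.0 * math.atanh(prod)
    # Variable-to-check messages Q_{i,j} (Expression 8, second update).
    Q_new = {(i, j): r[i] + theta * sum(R[(i, j2)] for j2 in J[i] if j2 != j)
             for i in range(n) for j in J[i]}
    # Extrinsic information (Expression 9) and updated LLRs (Expression 10).
    lam_x = [sum(R[(i, j)] for j in J[i]) for i in range(n)]
    llr = [r[i] + alpha * lam_x[i] for i in range(n)]
    return Q_new, llr

# Example: the diagonalized matrix Hnew and the received word of the text.
H_new = [[0, 1, 1, 1, 0, 0], [0, 0, 0, 0, 1, 1], [1, 1, 0, 1, 0, 1]]
r = [0.4, 1.2, 0.1, 0.7, 0.3, 0.7]
Q0 = {(i, j): r[i] for j in range(3) for i in range(6) if H_new[j][i]}
Q1, llr = bp_iteration(H_new, r, Q0)
```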
The updating of the LLR by belief propagation (BP) is repeated until a repetition stopping condition prepared in advance is satisfied, for example, until a maximum repetition number ItH is reached.
Meanwhile, not all columns need be made an object of the LLR updating; the updating may be performed only for some of the columns, for example, only for those columns which are the object of the diagonalization.
By performing diagonalization again using the reliabilities of the LLR values updated by belief propagation (BP), that is, using the magnitude of the absolute value of each LLR value as the reliability, in order from the column corresponding to the symbol of lowest reliability, repeated decoding by new belief propagation (BP) can be performed.

This is referred to as inner repeated decoding. This updating of the LLR is repeated until an inner repeated decoding stopping condition SC1 prepared in advance is satisfied.
Further, a plurality of rankings other than the reliability ranking of the received values are prepared as initial values of the diagonalization priority ranking of the columns of the parity check matrix, and the inner repeated decoding is performed serially or in parallel using the plurality of rankings.

This is referred to as outer repeated decoding. This LLR updating is repeated until an outer repeated decoding stopping condition SC2 prepared in advance is satisfied.
The LLR values updated repeatedly by the ABP (Adaptive Belief Propagation) procedure described above are inputted to a decoder to perform decoding.
Now, if it is assumed that the object linear code is a Reed-Solomon code, the repeated decoding stopping conditions SC1 and SC2 may be, for example, as given below:
(A) H·dt = 0, or repetition number t≧N,
(B) Success in bounded distance decoding, or repetition number t≧N,
(C) Success in Koetter-Vardy soft decision list decoding, or repetition number t≧N,
where d=(d1, d2, . . . , dn) is the result of the hard decision of Λiq, that is, di=0 if Λiq>0 and di=1 otherwise, and N is a maximum number of cycles of repetition determined in advance.
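Stopping condition (A) is a plain syndrome check on the hard decision of the updated LLRs. A short Python sketch (function names are illustrative):

```python
def hard_decision(llr):
    """d_i = 0 if the LLR of bit i is positive, d_i = 1 otherwise."""
    return [0 if x > 0 else 1 for x in llr]

def syndrome_zero(H, d):
    """Stopping condition (A): H * d^t = 0 over GF(2)."""
    return all(sum(h * x for h, x in zip(row, d)) % 2 == 0 for row in H)

# Example with the parity check matrix of Expression 1: all-positive LLRs
# give the all-zero word, which trivially satisfies every parity check.
H = [[1, 0, 1, 0, 0, 1], [1, 1, 0, 1, 0, 1], [0, 1, 1, 1, 1, 1]]
d = hard_decision([0.4, 1.2, 0.1, 0.7, 0.3, 0.7])
```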
Meanwhile, the following methods may be applied as the decoding method:
(a) Hard decision decoding
(b) Bounded distance decoding
(c) Koetter-Vardy soft-decision list decoding
FIG. 2 illustrates repeated decoding which uses the ABP decoding method.
Referring to FIG. 2, a search for the reliability order of received words is performed (step ST1) and then order conversion is performed (step ST2).
Diagonalization of a parity check matrix is performed in response to the resulting order (step ST3), and belief propagation (BP) is performed using the parity check matrix (step ST4).
Thereafter, the LLR is calculated (step ST5), and a search for the reliability order of the calculated LLR values is performed (step ST6). Then, the decoded words are added to a list (step ST7).
Then, the processes described above are repeated until the repeated decoding stopping conditions SC1 and SC2 are satisfied (steps ST8 and ST9).
Then, one of the decoded words is selected (step ST10).
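The inner loop of FIG. 2 (steps ST1 to ST9) can be sketched end to end as follows. This is a minimal Python illustration under simplifying assumptions (one BP round per iteration, stopping condition (A) only; the helper names abp_decode, _diag and _bp are invented for the sketch), not the apparatus itself:

```python
import math

def abp_decode(H, r, it_max=5, theta=1.0, alpha=1.0):
    """Sketch of the ABP inner loop: rank reliabilities (ST1-ST2),
    rediagonalize H (ST3), run BP and recompute LLRs (ST4-ST5),
    list the hard-decision candidate (ST7), stop on condition (A)."""
    n = len(H[0])
    llr = list(r)
    candidates = []
    for _ in range(it_max):
        order = sorted(range(n), key=lambda i: abs(llr[i]))   # ST1-ST2, ST6
        Hn = _diag(H, order)                                  # ST3
        llr = _bp(Hn, llr, theta, alpha)                      # ST4-ST5
        d = [0 if x > 0 else 1 for x in llr]                  # hard decision
        candidates.append(d)                                  # ST7
        if all(sum(a * b for a, b in zip(row, d)) % 2 == 0 for row in H):
            break                                             # SC1 met (ST8)
    return candidates

def _diag(H, order):
    """GF(2) Gauss-Jordan elimination with pivot columns tried in order."""
    H = [row[:] for row in H]
    p = 0
    for c in order:
        if p == len(H):
            break
        rows = [t for t in range(p, len(H)) if H[t][c]]
        if not rows:
            continue  # linearly dependent column: skip
        H[p], H[rows[0]] = H[rows[0]], H[p]
        for t in range(len(H)):
            if t != p and H[t][c]:
                H[t] = [(a + b) % 2 for a, b in zip(H[t], H[p])]
        p += 1
    return H

def _bp(H, r, theta, alpha):
    """One BP round (Expressions 8-10) returning the updated LLRs."""
    m, n = len(H), len(H[0])
    Q = {(i, j): r[i] for j in range(m) for i in range(n) if H[j][i]}
    R = {}
    for j in range(m):
        I = [i for i in range(n) if H[j][i]]
        for i in I:
            prod = 1.0
            for i2 in I:
                if i2 != i:
                    prod *= math.tanh(Q[(i2, j)] / 2.0)
            prod = max(min(prod, 0.999999), -0.999999)
            R[(i, j)] = 2.0 * math.atanh(prod)
    return [r[i] + alpha * sum(R[(i, j)] for j in range(m) if H[j][i])
            for i in range(n)]

# The running example: decoding stops after one iteration on the zero word.
H = [[1, 0, 1, 0, 0, 1], [1, 1, 0, 1, 0, 1], [0, 1, 1, 1, 1, 1]]
r = [0.4, 1.2, 0.1, 0.7, 0.3, 0.7]
candidates = abp_decode(H, r)
```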