(a) Field of the Invention
The present invention relates to an apparatus for encoding and decoding of LDPC (Low-Density Parity-Check) codes, and a method thereof. More specifically, the present invention relates to an apparatus for encoding and decoding of LDPC codes that facilitates the design of the encoding and decoding, and a method thereof.
(b) Description of the Related Art
An LDPC code is a linear block code invented by Gallager in 1962 and is defined by a sparse parity-check matrix, that is, a matrix in which most of the elements have a value of zero.
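For illustration, a word is a valid codeword exactly when it satisfies every parity check, i.e., every row of the parity-check matrix H has an even overlap with the word. The following is a minimal sketch with a hypothetical toy matrix, not a code from any cited invention; real LDPC matrices are far larger and far sparser:

```python
# Hypothetical toy parity-check matrix H for a length-6 code.
# Real LDPC matrices are much larger, and most of their entries are 0.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def satisfies_all_checks(H, c):
    """Return True when every row of H has even (mod-2) overlap with c,
    i.e., when H * c^T = 0 over GF(2)."""
    return all(sum(h * b for h, b in zip(row, c)) % 2 == 0 for row in H)

print(satisfies_all_checks(H, [1, 1, 0, 0, 1, 1]))  # True: a valid codeword
print(satisfies_all_checks(H, [1, 0, 0, 0, 0, 0]))  # False: a check fails
```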
The LDPC code was almost forgotten for decades because the cost of its implementation was too high at that time. It was rediscovered in 1995 and further improved by generalization into the irregular LDPC code in 1998.
A probabilistic decoding algorithm for the LDPC code was also invented at the time of Gallager's first discovery of the LDPC code. The performance of the LDPC code decoded by this algorithm is remarkably high and is further improved by extending the codeword alphabet from binary to non-binary symbols.
Like a turbo code, the LDPC code achieves a bit error rate (BER) very close to the Shannon channel capacity limit. An irregular LDPC code known to have the highest performance comes within only 0.13 dB of the Shannon channel capacity at a bit error rate (BER) of 10⁻⁶ when its code length is about one million (10⁶) bits in the additive white Gaussian noise (AWGN) channel environment. For that reason, the irregular LDPC code is suitable for applications that require a high-quality transmission environment with a considerably low bit error rate (BER).
Unlike the algebraic decoding algorithms used for ordinary block codes, the LDPC decoding algorithm is a probabilistic decoding method that adopts a belief propagation algorithm based on graph theory and statistical estimation theory.
An LDPC decoder calculates the probability that each bit in a codeword received through a channel is equal to 1 or 0. The probability information calculated by the decoder is called the “message,” and the quality of the message is checked against each parity defined in the parity-check matrix. The message calculated when a specific parity of the parity-check matrix is satisfied is called the “parity-check message,” and it indicates the most likely value for each codeword bit. The parity-check messages for the parities are used to determine the values of the corresponding bits, and the resulting bit information is called the “bit message.”
This iterative message-passing process continues until the bit information of the codeword satisfies all the parities in the parity-check matrix, and the codeword decoding process ends at the time when all the parities are satisfied. Systematic codes are commonly used in channel environments having a low signal-to-noise ratio so that a specific part of the codeword can be extracted to regenerate the information bits. The term “systematic code” as used herein refers to a code constructed so that the information word is provided as a part of the codeword. Namely, a systematic code cleanly divides the codeword into an information-word part and an additional part provided for error correction.
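The extraction step for a systematic code can be sketched as follows; the bit values here are hypothetical placeholders, chosen only to show that recovering the information word reduces to taking a slice of the codeword:

```python
# Hypothetical illustration of a systematic code: the information word
# appears verbatim as a prefix of the codeword, followed by parity bits.
info = [1, 0, 1, 1]
parity = [0, 1, 1]        # would be computed by the encoder; values arbitrary here
codeword = info + parity

# Regenerating the information bits is then a simple extraction (slice).
recovered = codeword[:len(info)]
print(recovered == info)  # True
```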
Generally, codes suitable for graph decoding are LDPC codes, and decoding algorithms for the LDPC codes include a sum-product algorithm and a min-sum algorithm. The sum-product algorithm exhibits an optimal performance when the Tanner graph has no cycle. The min-sum algorithm is less complex than the sum-product algorithm but inferior in performance. A structure designed to correct all losses during the decoding process, which would solve this problem, is too complex to implement in hardware or software.
Korean Patent Application No. 2001-33936 (filed on Jun. 15, 2001) discloses an invention under the title of “Graph Decoding Method Using Block Correction”, which relates to a graph decoding method using block correction by adding a correction factor to the min-sum algorithm for decoding LDPC codes to achieve a decoding performance close to that of the sum-product algorithm.
More specifically, the cited invention provides a graph decoding method using partial correction: a correction factor is added in consideration of the combination of K sites that contribute most of the inputs of the min operation causing the performance deterioration in the min-sum algorithm, i.e., the combination of K sites occurring most frequently in the min-sum algorithm. This achieves a performance approaching that of the sum-product algorithm without increasing the complexity of the min-sum algorithm too much.
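The exact correction scheme of the cited application is not reproduced here; as a stand-in, the well-known offset variant of min-sum illustrates the general idea of adding a correction factor to the min-sum output, with a hypothetical offset value `beta`:

```python
def check_update_offset_min_sum(llrs, beta=0.15):
    """Offset min-sum: the plain min-sum magnitude is reduced by a
    correction offset beta (hypothetical tuning constant) to compensate
    for the magnitude overestimation of the min operation."""
    sign = 1.0
    for v in llrs:
        sign *= 1.0 if v >= 0 else -1.0
    mag = min(abs(v) for v in llrs)
    return sign * max(mag - beta, 0.0)

print(check_update_offset_min_sum([1.2, -0.8, 2.5]))  # ≈ -0.65
```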
According to this cited invention, a correction device is applied to the codeword positions with the greatest effect so as to improve the algorithm, guaranteeing a performance equivalent to that of the original decoding algorithm with relatively less hardware.
Korean Patent Application No. 2001-50423 (filed on Aug. 21, 2001) by the applicant of the present invention discloses an invention under the title of “Apparatus for Adaptively Determining Maximum Number of Decoding Iterations for LDPC Decoder Using Signal-to-Noise Ratio Estimation, Method thereof, LDPC Decoding Apparatus Including the Apparatus, and Method thereof.”
More specifically, the apparatus for adaptively determining the maximum number of decoding iterations for an LDPC decoder according to the cited invention estimates a signal-to-noise ratio corresponding to a received LDPC encoded signal and adaptively determines the maximum number of decoding iterations corresponding to the estimated signal-to-noise ratio based on a memory storing maximum numbers of decoding iterations corresponding to various signal-to-noise ratios.
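Under the assumption that the stored table maps signal-to-noise-ratio ranges to iteration limits, the adaptive determination can be sketched as a simple lookup; all thresholds and iteration counts below are hypothetical illustrative values, not figures from the cited application:

```python
# Hypothetical table: estimated SNR ranges -> maximum decoding iterations.
# Lower SNR generally needs more iterations to reach the target performance.
SNR_TO_MAX_ITER = [
    (0.5, 50),   # below 0.5 dB: allow up to 50 iterations
    (1.5, 30),
    (2.5, 15),
]
DEFAULT_MAX_ITER = 8  # high SNR: a few iterations usually suffice

def max_iterations(estimated_snr_db):
    """Return the maximum iteration count for the estimated SNR."""
    for threshold, iters in SNR_TO_MAX_ITER:
        if estimated_snr_db < threshold:
            return iters
    return DEFAULT_MAX_ITER

print(max_iterations(1.0))  # 30
print(max_iterations(3.0))  # 8
```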
According to the cited invention, a signal-to-noise ratio corresponding to a received signal is estimated to adaptively determine the maximum number of decoding iterations that satisfies a required performance. This reduces the average number of decoding iterations and hence the signal delay, but increases the number of calculations.
U.S. Pat. No. 6,633,856 (filed on Oct. 10, 2001) discloses an invention under the title of “Method and Apparatus for Decoding LDPC codes,” which relates to a method and apparatus for decoding a codeword using LDPC codes and a message-passing decoding algorithm used for long codes.
More specifically, the cited invention is directed to a method for encoding a codeword having a large graph composed of a plurality of small graphs of the same size, in which the large graph is configured from the plurality of small graphs using an algorithm for substitution of columns in a matrix. The column substitution algorithm can be implemented with a message-passing function among the small graphs. The messages corresponding to the small graphs are collectively stored in one memory and written to or read from the memory by a SIMD write/read command. The graph substitution operation can be configured with a simple message rearrangement command, and the substitution command can also be used for a cyclic substitution. Therefore, a message set read out of one message-set memory is rearranged in sequence by cyclic substitution and passed to a processor circuit for the small graph to be processed next.
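The cyclic substitution of a message set can be sketched as a rotation; the message labels and the helper name below are hypothetical, serving only to show the rearrangement applied between the memory and the next small-graph processor:

```python
from collections import deque

def cyclic_substitute(messages, shift):
    """Rearrange a message set by a cyclic shift, standing in for the
    substitution command applied before the next small-graph processor."""
    d = deque(messages)
    d.rotate(shift)  # rotate right by `shift` positions
    return list(d)

msgs = ['m0', 'm1', 'm2', 'm3']
print(cyclic_substitute(msgs, 1))  # ['m3', 'm0', 'm1', 'm2']
```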
This cited invention includes a function for efficient storage and reading of a memory for parallel processing of the decoder, thereby simplifying the implementation structure of the decoder and enhancing its speed.
The LDPC codes are generated by a basically random design method, so all the information about the random parity-check matrix must be stored in a memory in order to configure an encoder or a decoder for the LDPC codes. This means that the locations of all the nonzero elements in the parity-check matrix must be stored. However, the number of 1s increases with the size of the parity-check matrix, greatly increasing the amount of information to be stored.
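Storing a random matrix in this way amounts to recording the coordinates of every nonzero element; the toy matrix below is hypothetical and only illustrates that the memory requirement grows with the number of 1s:

```python
# Hypothetical toy matrix stored as a coordinate list of its 1s.
H = [
    [1, 0, 1, 0],
    [0, 1, 0, 1],
]
nonzeros = [(r, c) for r, row in enumerate(H) for c, v in enumerate(row) if v]
print(nonzeros)       # [(0, 0), (0, 2), (1, 1), (1, 3)]
print(len(nonzeros))  # 4 coordinate pairs must be stored
```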
In addition, the random characteristic of the parity-check matrix further complicates address searching and the read/write operations of information in the memory of the encoder/decoder, and also increases the number of factors to be considered in the code generation process, making it difficult to generate high-performance codes.