The LDPC code is a code based on blocks. The encoder processes blocks of K bits and delivers blocks of N bits. Thus, N-K redundancy bits are added. These N-K bits are called “parity bits”. The coding rate (or “Code rate”) is defined by the ratio K/N. The lower the coding rate, the higher the number of redundancy bits and thus the greater the protection against noise of the transmission channel.
These N-K parity bits are calculated with the aid of a parity matrix H; the LDPC code is therefore also a code based on a matrix. This matrix has N-K rows and N columns and is composed of "1s" and "0s", with a low number of "1s" relative to the number of "0s". This is why codes based on such a matrix are dubbed "LDPC codes", that is to say low-density parity-check codes. The encoded block BLC, of N bits, is calculated by solving the equation H·BLC^T = 0, where H denotes the parity matrix and ^T the transpose operation.
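The parity-check relation can be sketched as follows. This is a minimal illustration, not the code construction of an actual LDPC standard: the 3×6 matrix H and the parity equations below are hypothetical, chosen only so that each row of H corresponds to one parity equation (K = 3 information bits, N = 6, rate 1/2).

```python
# Hypothetical 3x6 parity matrix (N - K = 3 rows, N = 6 columns).
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def satisfies_parity(h, block):
    """Return True if H . block^T = 0 (mod 2), i.e. every row's parity equation holds."""
    return all(sum(a * b for a, b in zip(row, block)) % 2 == 0 for row in h)

# Each parity bit closes one equation of H: p0 = b0^b1, p1 = b1^b2, p2 = b0^b2.
info = [1, 0, 1]
parity = [info[0] ^ info[1], info[1] ^ info[2], info[0] ^ info[2]]
blc = info + parity
print(satisfies_parity(H, blc))  # True
```

Flipping any bit of the encoded block violates at least one parity equation, which is what the decoder exploits to detect and correct errors.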
On the decoder side, correction of erroneous bits is performed on the basis of the relations between the coded information of the block received. These relations are given by the parity matrix H. The decoder uses internal metrics corresponding to the “1s” of the matrix H. The matrix H corresponds to the Tanner graph of the LDPC code comprising so-called check nodes and information nodes (or “bit nodes”) linked together by paths of the graph that are representative of the messages iteratively exchanged between the nodes thus linked. These metrics are updated row-wise (updating of the check nodes), taking into account the internal metrics of each row. Thereafter, the decoder updates these metrics column-wise (updating of the information nodes), taking into account the internal metrics in each column as well as the corresponding information input to the decoder and originating from the transmission channel. An iteration corresponds to the updating of the check nodes for all the internal metrics, followed by the updating of the information nodes for all the internal metrics. The decoding of a block requires several iterations.
The values of the decoded bits, also called “hard decisions,” are obtained by adding together the internal metrics column-wise and the information received and by taking the sign of the result. The result is also called a “soft decision” or LLR (Log Likelihood Ratio). The sign of the result provides the value “0” or “1” of the bit while the absolute value of the result gives a confidence indication (probability) for this “0” or “1” logic value.
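The hard-decision step described above can be sketched as follows. The sign convention (positive soft decision maps to logic "0") and the toy values are assumptions for illustration; only the structure of the computation — column-wise sum of internal metrics plus channel information, then sign and magnitude — follows the text.

```python
# Sketch of the hard-decision step: for each column (one bit), sum the
# internal metrics with the channel LLR; the sign of this "soft decision"
# gives the bit value, its absolute value a confidence indication.
def hard_decisions(column_metrics, channel_llrs):
    bits, confidences = [], []
    for col, llr in zip(column_metrics, channel_llrs):
        soft = sum(col) + llr              # soft decision (LLR)
        bits.append(0 if soft >= 0 else 1)  # assumed convention: positive -> "0"
        confidences.append(abs(soft))       # reliability of the decision
    return bits, confidences

cols = [[2, 1], [-1, -1], [1, -3]]  # internal metrics per column (toy values)
llrs = [1, -1, 1]                   # channel information per column
print(hard_decisions(cols, llrs))   # ([0, 1, 1], [4, 3, 1])
```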
Codes of the LDPC type are beneficial since they make it possible to obtain a very low bit error rate (BER), because of the iterative character of the decoding algorithm. Several iterative decoding algorithms exist for decoding LDPC codes. Mention may in particular be made of the conventional so-called “belief propagation” (BP) algorithm, well known to the person skilled in the art.
For moderate code word sizes, one of the most effective solutions for obtaining good error performance is to use LDPC codes constructed on non-binary Galois fields of order q, denoted GF(q); the Galois field is non-binary when q is greater than 2. Non-binary LDPC codes represent a more general class of LDPC codes that considers sets of bits of a block instead of independent bits. Each set of bits forms a symbol of a chosen Galois field. For example, symbols composed of 8 bits take their values in the Galois field GF(256), comprising 256 symbols, instead of the two symbols "0" and "1" of a binary code.
An LDPC code in a Galois field GF(q) is defined by a sparse parity matrix H whose nonzero elements belong to GF(q). However, the associated decoders, without particular simplification, have the disadvantage of a complexity that varies as O(q²). Consequently, no field of order larger than q=16 can be considered for a "hardware" implementation using technology commonly available today.
The updating of a check node is an operation which takes a significant amount of the decoding time. It can be considered to be a function applied to the Dc messages corresponding to the nonzero symbols of a row of the parity matrix H. The updating of a check node updates these Dc messages.
If F denotes the function making it possible to calculate the updating of the messages, with Ei an original message, the updated message E′k is defined by:

E′k = F(Ei) ∀ i ∈ [0, Dc−1], i ≠ k  (1)
Equation (1) implies that the updating of each message requires the other Dc−1 messages. In practice, a recursive application of an elementary function denoted G is used to calculate the Dc updated messages. We then have:

E′k = G(E0, G(E1, … G(Ei, … G(EDc−2, EDc−1) … )))  (2)
Equation (2) shows that the function G is applied in a recursive manner to the input messages. Thus, to minimize the global latency time, the latency time of G must be minimal. The function G corresponds to an elementary update operation.
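The recursive structure of equation (2) can be sketched as a fold of G over the input messages. Here G is a deliberate placeholder (plain addition) standing in for the real elementary update operation described below; only the recursion pattern is the point, and it shows why the latency of G is on the critical path.

```python
# Sketch of equation (2): G is applied recursively, innermost pair first.
from functools import reduce

def G(a, b):
    return a + b  # placeholder for the elementary update operation

def recursive_update(messages):
    # Computes G(E0, G(E1, ... G(E_{Dc-2}, E_{Dc-1}) ... ))
    return reduce(lambda acc, e: G(e, acc), reversed(messages[:-1]), messages[-1])

print(recursive_update([1, 2, 3, 4]))  # 10
```

Because each application of G waits on the result of the inner one, the total latency grows linearly with Dc, which is why minimizing the latency of G minimizes the global latency.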
An elementary update receives two input messages and produces an output message possessing the same format as the input messages. In the case where a simplified decoding algorithm of Extended Min-Sum (EMS) type is used, described in the document "Low-complexity, low-memory EMS algorithm for non-binary LDPC codes" published by the IEEE (Institute of Electrical and Electronics Engineers) on 24-28 Jun. 2007, each message is composed of two vectors of nm values, with nm << q. The first vector comprises the nm metrics sorted in decreasing order; these metrics are signed values coded on Nllr bits. The second vector comprises the nm symbols of the Galois field GF(q) associated with the metrics of the first vector, sorted in the same order as the associated metrics; each symbol is coded on log2(q) bits, i.e. for example 8 bits for a decoder operating on the Galois field GF(256).
Stated otherwise, it may thus be considered that each message comprises nm doublets, a doublet comprising a symbol and the metric associated with the symbol. The nm doublets of each message are ranked in decreasing order as a function of the metrics.
The objective of an elementary update operation is to obtain the nm most probable symbols, that is to say the nm symbols associated with the largest metrics. Since each of the two input messages comprises nm doublets, there are theoretically nm² candidates. Combining the two messages thus yields a square matrix of dimension nm × nm comprising nm² doublets. Each doublet of this matrix comprises the combination of two symbols and the combination of the associated metrics. The combination of two metrics is their algebraic sum, and the combination of the associated symbols is their sum in GF(q), that is to say a bitwise XOR operation. The elementary update explores this matrix and extracts the nm most probable symbols, that is to say the nm symbols associated with the nm largest metrics.
As the messages are already sorted, for example in decreasing order of their metrics, the combination possessing the largest sum of metrics is always situated in the top left corner of the matrix, i.e. S00 = U0 + V0, where S is the matrix resulting from combining the two input messages U and V. Thereafter, the next combination possessing the largest sum of metrics is either (U0 + V1) or (U1 + V0). At each step, the combination in the Galois field GF(q) of the symbols is effected at the same time as the sum of the associated metrics.
The number of steps necessary for finding the nm combinations of symbols that are most probable, that is to say associated with the nm largest sums of metrics, is generally larger than the number nm. Indeed, to be validated, a symbol arising from the combination of two input symbols must be different from the already calculated symbols.
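The exploration described above can be sketched as a serial software model, assuming a best-first walk from the top-left corner of the candidate matrix with a max-heap; the text's hardware implementation is parallel, so this sketch only illustrates the candidate order and the rejection of redundant symbols. Symbols are small integers standing in for GF(2^p) elements, and their sum is a bitwise XOR.

```python
# Serial sketch of an EMS elementary update: U and V each hold nm
# (symbol, metric) doublets sorted by decreasing metric. Candidate
# S[i][j] combines symbols by XOR and metrics by algebraic sum; the
# matrix is explored best-first, and a candidate is kept only if its
# symbol has not already been emitted.
import heapq

def elementary_update(U, V, nm):
    heap = [(-(U[0][1] + V[0][1]), 0, 0)]  # max-heap via negated metric sums
    visited = {(0, 0)}
    seen_symbols = set()
    out = []
    while heap and len(out) < nm:
        neg, i, j = heapq.heappop(heap)
        sym = U[i][0] ^ V[j][0]        # symbol sum in GF(2^p) = bitwise XOR
        if sym not in seen_symbols:     # discard redundant symbols
            seen_symbols.add(sym)
            out.append((sym, -neg))
        for ni, nj in ((i + 1, j), (i, j + 1)):
            if ni < len(U) and nj < len(V) and (ni, nj) not in visited:
                visited.add((ni, nj))
                heapq.heappush(heap, (-(U[ni][1] + V[nj][1]), ni, nj))
    return out

print(elementary_update([(3, 9), (5, 6), (1, 2)],
                        [(6, 8), (3, 5), (0, 1)], 3))
# → [(5, 17), (0, 14), (3, 14)]
```

Note that when a popped candidate's symbol duplicates an already emitted one, the loop continues to the next candidate, which is precisely why the number of steps can exceed nm.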
The updating of a check node is a very important step of a non-binary LDPC decoding which takes a great deal of time, and which comprises a plurality of elementary updates.
The document “Low-complexity, low-memory EMS algorithm for non-binary LDPC codes” mentioned above describes a method making it possible to reduce the number of operations necessary for updating the check nodes while retaining performance close to decoding based on belief propagation (BP). However, the method described in that document is a method operating serially in the sense that an elementary update produces at most one doublet out of the nm of the output vector per clock period. The major problem of a serial implementation of an updating of a check node resides in its low data throughput.
It is possible to implement a plurality of serial processors working in parallel. However, at high working frequencies and for small information blocks, the achievable parallelism may remain very low, so that the maximum data throughput remains limited.
In the case of a parallel implementation making it possible to carry out an elementary update operation, it is not necessary to explore the nm2 doublets of the matrix. Since this matrix is devised on the basis of vectors sorted in decreasing order of metrics, the number of doublets of the matrix having a possibility of being in the output vector may be considerably reduced.
The potential doublets of the matrix form a pattern as illustrated by black points in FIG. 1. The example presented in FIG. 1 is given for nm=10. The total number of potential doublets may be limited to 27 in the case where nm=10. The general formula making it possible to calculate the number of potential doublets depends on the number nm and is given by:
N = 2 · ( Σ_{k=1}^{kmax} ⌊nm/k⌋ ) − kmax²  (3)
Here kmax is the largest integer whose square does not exceed nm (so that kmax² = 9 in the case where nm = 10), and ⌊x⌋ denotes the value of x rounded down to the nearest integer. It should be noted that the chosen number of potential doublets may be larger than that given by equation (3), so as to compensate for the problem of redundant symbols.
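Equation (3) can be transcribed directly, assuming kmax = ⌊√nm⌋ (the largest integer whose square does not exceed nm), which reproduces the counts quoted in this text for nm = 10, 12 and 16.

```python
# Number of potential doublets of the candidate matrix, per equation (3):
# N = 2 * (sum_{k=1}^{kmax} floor(nm / k)) - kmax^2, with kmax = floor(sqrt(nm)).
from math import isqrt

def potential_doublets(nm):
    kmax = isqrt(nm)
    return 2 * sum(nm // k for k in range(1, kmax + 1)) - kmax * kmax

print(potential_doublets(10))  # 27
print(potential_doublets(12))  # 35
print(potential_doublets(16))  # 50
```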
However, despite this reduction in the number of potential doublets, the number of doublets to be sorted remains large. Indeed, for conventional values of nm = 12 or nm = 16, 35 or 50 values, respectively, must be sorted in parallel, requiring large parallel sorters that limit the throughput.
Moreover, as regards the deletion of the redundant symbols, the most direct technique consists in comparing the symbols of each sorted potential doublet with the symbols of the other potential doublets and deleting the doublets comprising identical symbols. Carrying out such an operation in parallel requires numerous comparators and multiplexers and considerably increases the block decoding time.
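The direct all-pairs deletion described above can be sketched as follows. In hardware each symbol comparison would be a dedicated comparator operating in parallel; in this serial model the comparisons simply run in a loop, and a doublet is kept only if its symbol differs from every better-ranked doublet's symbol.

```python
# Sketch of direct redundant-symbol deletion: keep each sorted doublet
# only if its symbol does not appear among the better-ranked doublets.
def delete_redundant(doublets):
    # doublets: (symbol, metric) pairs already sorted by decreasing metric.
    keep = []
    for i, (sym, met) in enumerate(doublets):
        if all(sym != s for s, _ in doublets[:i]):  # compare with all earlier symbols
            keep.append((sym, met))
    return keep

print(delete_redundant([(5, 17), (0, 14), (5, 14), (3, 11)]))
# → [(5, 17), (0, 14), (3, 11)]
```

Each kept doublet depends on comparisons against all earlier ones, which is why a fully parallel realization needs on the order of nm²/2 comparators plus multiplexers to compact the surviving doublets.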