Numerous encoding techniques for error correction are already known. The first studies on the subject go back to the 1940s, when Shannon laid the foundations of information theory, which are still in use today.
Numerous families of codes were subsequently proposed. The current state of the art in error correction is well represented by the most recent turbo codes and the LDPC codes.
Turbo-codes were invented in 1991 by Berrou and Glavieux [3] (the references cited are grouped together at the end of the description, in paragraph 9).
As illustrated in FIG. 1, a turbo-code encodes an information block (a block of X bits) a first time by means of a convolutional encoding 11 described by a trellis having a small number of states (generally 8 to 16) to compute a block of redundancy bits Y1, then permutes or interleaves the information block into another order (12) and encodes it once again (13) to give a block of redundancy bits Y2.
The transmitted encoded block is therefore formed by X, Y1 and Y2, possibly together with other permuted blocks 14 or encoded blocks 15 of additional redundancies Yi.
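The X/Y1/Y2 structure described above can be sketched in a few lines of Python. The generator taps and the permutation below are illustrative choices only (a toy feed-forward encoder, not the recursive systematic encoders of any standardized turbo-code):

```python
def conv_parity(bits, g=(1, 1, 1)):
    """One stream of parity bits from a toy convolutional encoder whose
    generator taps g act on the current bit and the shift register."""
    reg = [0] * (len(g) - 1)                  # encoder memory, initially zero
    out = []
    for b in bits:
        window = [b] + reg
        out.append(sum(t & x for t, x in zip(g, window)) & 1)
        reg = [b] + reg[:-1]                  # shift the register
    return out

def turbo_encode(x, perm):
    """Return (X, Y1, Y2): systematic bits, first parity block (11),
    and the parity of the interleaved block (12, 13)."""
    y1 = conv_parity(x)                       # first encoding
    y2 = conv_parity([x[i] for i in perm])    # encode again in permuted order
    return x, y1, y2

x = [1, 0, 1, 1, 0, 0, 1, 0]
perm = [3, 7, 0, 5, 2, 6, 1, 4]               # an illustrative interleaver
X, Y1, Y2 = turbo_encode(x, perm)             # the transmitted triplet
```

The essential design point appears in `turbo_encode`: the same information bits are encoded twice, once in natural order and once in interleaved order, so the decoder can exchange soft information between the two redundancy blocks.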
The publication of the turbo-codes, and the discovery of their performance for a decoding complexity low enough to be compatible with the technology of electronic chips for large-scale consumer applications in the 1990s, gave rise to numerous articles on error correction codes and their soft-decision iterative decoding. It finally became possible to approach the limit published by Shannon in 1948, i.e. to transmit information at a bit rate approaching the maximum capacity of the channel used, whether a wired electrical link, an optical link or a radio link.
This renewal of the field of information theory led, in 1995, to the rediscovery of the LDPC (“Low-Density Parity Check”) codes invented by Gallager [1] in 1960 and generalized by Tanner [2] in 1981, and then to the publication of a variant of the LDPC codes, the RA (“Repeat-Accumulate”) codes [4], by Divsalar and McEliece in 1998.
LDPC codes are defined by a parity check matrix H that is sparse, i.e. that comprises very few 1s and many 0s (for binary codes). For non-binary LDPC codes, for example those using a quaternary alphabet such as the ring Z4 of integers modulo 4, {0, 1, 2, 3}, the parity check matrix has many 0s and very few non-zero symbols {1, 2, 3}.
For reasons of simplicity of definition of the matrix, the initial binary LDPC codes, or “regular” LDPCs, were defined by a sparse parity check matrix having a fixed number dc of 1s per row and dv of 1s per column, as in the example of the (4,2) LDPC code here below, with parameters [n=12, k=7, dmin=2] (i.e. a code of length 12 whose check matrix has 4 “1s” on each row and 2 “1s” on each column):
  H = [ 1 1 1 1 0 0 0 0 0 0 0 0
        0 0 1 1 1 1 0 0 0 0 0 0
        0 0 0 0 1 1 1 1 0 0 0 0
        0 0 0 0 0 0 1 1 1 1 0 0
        0 0 0 0 0 0 0 0 1 1 1 1
        1 1 0 0 0 0 0 0 0 0 1 1 ]
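The regularity of this matrix can be checked mechanically. The short Python sketch below (rows and columns indexed from 0) verifies the row and column weights and illustrates why dmin = 2: the first two columns of H are identical, so the word with 1s in positions 0 and 1 satisfies every check:

```python
# The parity check matrix H of the example, one list per row.
H = [
    [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1],
    [1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1],
]

# Regular LDPC: dc = 4 ones on every row, dv = 2 ones on every column.
assert all(sum(row) == 4 for row in H)
assert all(sum(row[j] for row in H) == 2 for j in range(12))

def is_codeword(H, x):
    """A word x is a codeword iff every parity check sums to 0 modulo 2."""
    return all(sum(h * b for h, b in zip(row, x)) % 2 == 0 for row in H)

# Columns 0 and 1 are identical, hence a codeword of weight 2: dmin = 2.
assert is_codeword(H, [1, 1] + [0] * 10)
```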
Another representation of the LDPC codes is their bipartite Tanner graph, which represents the variables by nodes (or vertices) placed to the left of the graph and the check equations (or constraints) by XOR nodes placed to the right, linked to the binary variables by branches (or edges).
FIG. 2 presents the bipartite Tanner graph corresponding to the above matrix. The vertices of the 12 variables xi, with i=0, 1, . . . , 11, are represented by black dots 21. The 6 constraints 22 (modulo-2 sums) are placed to the right and referenced Cj for j=0, 1, . . . , 5. The permutations 23 are illustrated by branches interconnecting the variables and the constraints.
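The edges of such a bipartite graph can be read directly off H: variable xi is linked to check Cj whenever H[j][i] = 1. A minimal Python sketch of this adjacency, with variable and check indices 0-based as in the matrix above:

```python
# Rows of H written as bit strings, one per check C0..C5.
H = [
    "111100000000",
    "001111000000",
    "000011110000",
    "000000111100",
    "000000001111",
    "110000000011",
]

# Variable x_i is joined to check C_j whenever H[j][i] == '1'.
var_to_checks = {i: [j for j, row in enumerate(H) if row[i] == "1"]
                 for i in range(12)}
check_to_vars = {j: [i for i, b in enumerate(row) if b == "1"]
                 for j, row in enumerate(H)}

# Every variable carries dv = 2 edges and every check dc = 4 edges.
assert all(len(c) == 2 for c in var_to_checks.values())
assert all(len(v) == 4 for v in check_to_vars.values())
```

These two adjacency maps are exactly what an iterative message-passing decoder traverses: messages flow along the edges from variables to checks and back.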
It can be seen that the number of 1s on a row of the check matrix H of the code, here equal to 4, is also the number of inputs of each XOR constraint. This number is called the degree of the constraint (or of the local code) and is referenced dc. Similarly, the number of 1s on a column of the check matrix H, here equal to 2, is also the number of repetitions of each variable. This number is called the degree of the variable and is referenced dv. The rate of the overall code, r = k/n, has a lower limit:
  r = k/n ≥ 1 − dv/dc.
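This bound comes from counting checks: a regular code has m = n·dv/dc rows in H, so k ≥ n − m = n(1 − dv/dc), with equality only when the rows of H are linearly independent. A minimal numeric check of the bound for the (dc=4, dv=2) example:

```python
def rate_lower_bound(dv, dc):
    """Lower bound 1 - dv/dc on the rate r = k/n of a regular LDPC code
    (equality holds only when all rows of H are linearly independent)."""
    return 1 - dv / dc

# The (dc=4, dv=2) example code: the rate is at least 1/2.
assert rate_lower_bound(dv=2, dc=4) == 0.5
```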
In general, the degrees may vary from one variable to another and the local codes may differ. When the XOR constraints are replaced by small codes known as “local” codes, the overall code is called a Tanner code.
FIG. 3 shows the architecture of the Tanner codes, which is more general than that of the LDPC codes. The codes Ci 31 may be far more complex than the simple parity code obtained by an XOR operation as in the LDPC codes.
The turbo-codes, the LDPC codes and their variants provide noteworthy error correction performance for large block sizes, of at least a few thousand or tens of thousands of information bits, at the cost of a high decoding complexity which nevertheless remains compatible with the constantly increasing computation capacities of present-day microprocessors.
However, a major reduction in decoding complexity is greatly desired by the manufacturers of components implementing these error correction encoding and decoding functions, because it would reduce the silicon surface area of the electronic chips that implement these functions and therefore their production cost, ultimately providing a lower final cost for the consumer.
This reduction of complexity is also desired by consumers because it results in a lower consumption of the electrical power supplied, for example, by the batteries of mobile telephones or laptops connected to mobile radio telecommunications networks, and therefore in a greater autonomy of the portable terminal or in a lighter terminal.
The usual turbo-codes have a minimum distance that grows at best logarithmically with the length of the code, and the LDPC codes which approach the capacity of the channel are likewise at best logarithmic in the length of the code:
  dmin ∝ Log(n) when n → ∞.
A family of codes is said to be asymptotically good (AG) if the minimum distance of the codes increases linearly with the length of the code:
  “AG” ⇔ lim (n→∞) dmin/n = Ct > 0.
The performance of known, present-day codes is therefore not optimal and can still be improved, both in terms of error correction capacity and in terms of decoding complexity.
Furthermore, the known structures of present-day codes show excessively low error correction performance for small block sizes, of the order of a hundred to a thousand bits. The very high prevailing demand for small-packet digital communications is the cause of the interest in these small-length codes.
An increase in performance in terms of binary error rate may result especially in an increase in the quality of service provided to the user:
- improved range of base stations;
- less noisy data transmission;
- higher maximum information throughput rate available;
- greater number of simultaneous users in a same zone covered by a base station.