The present invention relates to digital transmission, in which block coding is used to correct transmission errors.
On the subject of digital transmission of information (speech, image, data, etc.), a distinction is usually made between source coding and channel coding. Source coding forms the binary representation of the signal to be transmitted; it is normally designed as a function of the nature of that signal. Much effort has been expended in recent years on source coding in order to reduce the digital rate while preserving good transmission quality. However, these new source coding techniques require better protection of the bits against perturbations arising during transmission. Moreover, the physical and economic limitations of high-frequency components (noise factor, power saturation), as well as regulations on the power level allowed for transmission, limit the range of digital transmission systems.
For this reason much work has been carried out on the subject of channel coding, in particular on the subject of block coding. This type of error-correcting coding consists in adding n-k redundancy bits to k information bits originating from the source coding, and in using these redundancy bits on reception in order to correct certain transmission errors. The ratio R = k/n is called the efficiency of the code, and the coding gain G is defined as the ratio, expressed in decibels, between the energies per information bit E_b which are necessary at the input of the receiver, without coding and with coding, in order to reach a given bit error rate (BER). A typical objective is to design coders, and more particularly the associated decoders, so that: (i) the coding gain G is as high as possible (G > 5 dB for BER = 10^-5), (ii) the efficiency of the code is as high as possible (R > 0.6), and (iii) the decoding complexity is as low as possible.
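As a small numerical illustration of the two quantities just defined, the sketch below computes the efficiency of the BCH (63,51,5) code cited later in this text, and a coding gain from two hypothetical (illustration-only) energy values:

```python
import math

# Code efficiency R = k/n for the BCH (63,51,5) code.
n, k = 63, 51
R = k / n  # approximately 0.81, above the R > 0.6 target

# Coding gain G: ratio, in decibels, of the energy per information bit E_b
# needed to reach a given BER without coding vs. with coding.
# The two E_b values below are hypothetical, chosen only for illustration.
Eb_without, Eb_with = 10.0, 3.0            # linear scale
G = 10 * math.log10(Eb_without / Eb_with)  # approximately 5.2 dB
```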
The case of the storage of the digital information may be seen as a particular case of transmission, in which the propagation channel includes a memory where the information remains in more or less long-term storage, the transmitter and the receiver being the same or not. It will thus be understood that, in general, the notions of channel coding and of associated decoding are applicable to the field of the storage of information in the same way as to transmission, the errors to be corrected then being those due to the reading or to the writing in the memory, to the alteration in the content of the memory or also to communications (remote or not) with the devices for reading and writing in the memory.
It is known to enhance the performance of error-correcting codes by using concatenation techniques. In particular, the technique of product codes, with which the present invention is more particularly concerned, makes it possible, from two simple block codes (that is to say, codes having a small minimum Hamming distance d), to obtain a code whose minimum Hamming distance is equal to the product of the minimum Hamming distances of the elementary codes used.
If a block code with parameters (n_1, k_1, d_1) is designated by C_1 and a block code with parameters (n_2, k_2, d_2) is designated by C_2, the application of the code which is the product of C_1 with C_2 consists in ordering the k_1 × k_2 successive information bits in a matrix, coding the k_1 rows of the matrix by the code C_2, and then coding the n_2 columns of the resultant matrix by the code C_1. The parameters of the product code P are then given by (n = n_1 × n_2; k = k_1 × k_2; d = d_1 × d_2). The efficiency R of the code P is equal to R_1 × R_2. Decoding of the code P according to the maximum likelihood a posteriori (MLP) makes it possible to reach optimal performance. The maximum asymptotic coding gain can then be approximated by the relation G < 10 log_10(R × d).
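As a toy illustration of this construction (not one of the codes considered by the invention), the sketch below takes C_1 = C_2 = the (3,2,2) single-parity-check code, builds the product code by encoding rows then columns, and verifies that the resulting parameters are (n = 9; k = 4; d = 4), i.e. d = d_1 × d_2:

```python
import itertools

def spc_encode(bits):
    """(3,2,2) single-parity-check code: append an even-parity bit."""
    return list(bits) + [sum(bits) % 2]

def product_encode(info):
    """Encode a 2x2 information matrix: rows by C_2, then columns by C_1."""
    rows = [spc_encode(r) for r in info]        # k_1 rows  -> k_1 x n_2
    cols = [spc_encode(c) for c in zip(*rows)]  # n_2 columns -> n_1 x n_2
    return [list(r) for r in zip(*cols)]        # back to row-major order

# Enumerate all 2**(k_1*k_2) = 16 codewords and check the minimum distance.
codewords = []
for b in itertools.product([0, 1], repeat=4):
    codewords.append(tuple(sum(product_encode([[b[0], b[1]], [b[2], b[3]]]), [])))

d_min = min(sum(x != y for x, y in zip(u, v))
            for u in codewords for v in codewords if u != v)
# d_min == 4 == d_1 * d_2; efficiency R = 4/9 = R_1 * R_2 = (2/3)**2
```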
The product code is thus very beneficial, but decoding according to the MLP is generally too complex, except in the case of short block codes.
In the article "On decoding iterated codes", IEEE Trans. on Information Theory, Vol. IT-16, No. 5, September 1970, pages 624-627, S. M. Reddy proposes an algorithm for decoding a product code constructed from elementary codes which are decodable by an algebraic decoder, which can be summarized in three steps:

- decoding the columns of the coded matrix by using an algebraic decoder;
- generating, for each column, an estimate of the reliability of the decoded bits based on the number of corrected bits; and
- decoding the rows by an algebraic decoder, making use of the reliability information determined during the decoding of the columns.

This decoding algorithm is sub-optimal with respect to the MLP, and does not make it possible to make full use of all the resources of the product code.

In their article "Separable MAP filters for the decoding of product and concatenated codes", Proc. ICC'93, Geneva, pages 1740-1745, May 1993, J. Lodge et al. proposed an iterative decoding algorithm comprising the following steps:

- decoding the columns by using the Bahl algorithm (see L. R. Bahl et al., "Optimal decoding of linear codes for minimizing symbol error rate", IEEE Trans. on Information Theory, Vol. IT-20, pages 284-287, March 1974), which estimates the logarithmic likelihood ratios (LLR) of the bits;
- decoding the rows by using the Bahl algorithm, taking as input data the likelihoods (LLR) calculated during the decoding of the columns; and
- recommencing the decoding of the columns with, as input data, the likelihoods (LLR) calculated during the decoding of the rows.

The decoding of the columns followed by the decoding of the rows is reiterated several times. This algorithm, although it leads to performance superior to that of the Reddy algorithm, is applicable only to short codes, for example the Hamming code (16,11,3). This is due to the fact that the Bahl algorithm uses the trellis associated with the block code, whose size grows exponentially as a function of n-k. This algorithm thus cannot be used in practice for high-efficiency codes such as, for example, the BCH code (63,51,5).

An object of the present invention is to propose a method of transmitting information bits involving a mode of decoding product codes which is well adapted to the case of high-efficiency codes. To this end, the iterative decoding relies on a soft-input/soft-output decoding of each row or column of the matrix, comprising the following steps:

- determining a number p of indices for which the components of the data vector are the least reliable;
- constructing a number q of binary words to be decoded from the said p indices and from the decision vector;
- obtaining q' code words on the basis of algebraic decodings of the decision vector and of the q binary words to be decoded;
- selecting, among the q' code words obtained, the one having the smallest Euclidean distance from the data vector;
- calculating a correction vector, each component W_j of the correction vector being calculated respectively by determining a possible concurrent word having its j-th component different from that of the selected code word, and by applying the formula W_j = ((M^c - M^d)/4) × C_j^d - R'_j when a concurrent word has been determined, M^d and M^c respectively designating the Euclidean distances, with respect to the data vector, of the selected code word and of the concurrent word, and C_j^d and R'_j respectively designating the j-th components of the selected code word and of the data vector;
- obtaining the new decision vector, taken to be equal to the said selected code word; and
- calculating the new data vector by adding the correction vector, multiplied by a first confidence coefficient, to the corresponding input vector extracted from the input matrix.
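The decoding steps above can be sketched in simplified form. The fragment below is a toy illustration, not the patented implementation: it assumes the Hamming (7,4,3) code as elementary code with antipodal (+1/-1) signalling, substitutes a brute-force nearest-codeword search for a true algebraic decoder, and uses assumed values for the confidence coefficient (alpha = 0.5) and for the reliability assigned when no concurrent word is found (beta = 0.7):

```python
import itertools

# All codewords of the Hamming (7,4,3) code, as +1/-1 symbols (bit 0 -> +1).
G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]
CODEWORDS = []
for m in itertools.product([0, 1], repeat=4):
    bits = [sum(m[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]
    CODEWORDS.append([1 - 2 * b for b in bits])

def hard_decode(word):
    """Nearest-codeword search, standing in for an algebraic decoder."""
    return min(CODEWORDS, key=lambda c: sum(ci != wi for ci, wi in zip(c, word)))

def chase_pyndiah_step(data, p=2, alpha=0.5, beta=0.7):
    """One soft-input decoding of a single row/column data vector."""
    n = len(data)
    decision = [1 if x >= 0 else -1 for x in data]
    # p indices where the components of the data vector are least reliable
    least = sorted(range(n), key=lambda j: abs(data[j]))[:p]
    # q = 2**p test words built by flipping subsets of those positions,
    # each algebraically decoded; duplicates removed -> q' code words
    candidates = []
    for mask in itertools.product([1, -1], repeat=p):
        test = decision[:]
        for idx, s in zip(least, mask):
            test[idx] *= s
        cw = hard_decode(test)
        if cw not in candidates:
            candidates.append(cw)

    def sqdist(c):  # squared Euclidean distance to the data vector
        return sum((x - y) ** 2 for x, y in zip(c, data))

    best = min(candidates, key=sqdist)  # smallest Euclidean distance
    Md = sqdist(best)
    # correction vector W: use a concurrent word when one exists,
    # otherwise fall back on the assumed reliability beta
    W = []
    for j in range(n):
        concurrent = [c for c in candidates if c[j] != best[j]]
        if concurrent:
            Mc = min(sqdist(c) for c in concurrent)
            W.append(((Mc - Md) / 4) * best[j] - data[j])
        else:
            W.append(beta * best[j] - data[j])
    # new decision vector = best; new data vector = input + alpha * W
    new_data = [x + alpha * w for x, w in zip(data, W)]
    return best, new_data
```

For example, decoding the noisy all-(+1) codeword [0.9, -0.2, 1.1, 0.8, 1.0, 0.7, 0.9] recovers the all-(+1) decision and pushes the unreliable second component back toward positive values.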