1. Field of the Invention
The present invention concerns in general terms a method of decoding turbocoded information. More precisely, it concerns an improvement to the decoding method when the latter fails to converge.
2. Discussion of the Background
Turbocodes currently constitute the most efficient error-correcting codes: amongst existing codes, they make it possible to obtain the lowest bit error rates for a given signal-to-noise ratio, and this with reasonable decoding complexity. They can be used either for continuous digital transmissions or for transmissions by frames.
Turbocodes were introduced by C. Berrou, A. Glavieux and P. Thitimajshima in an article entitled “Near Shannon Limit Error-Correcting Coding and Decoding: Turbo-codes” which appeared in ICC-1993 Conference Proceedings, pages 1064-1070. Turbocodes have subsequently been the subject of many developments and today the term turbocodes is given to a class of codes based on two concepts:
The first concept is the concatenation of several simple codes, referred to as elementary codes, separated by interleaving steps which modify the order in which the data are taken into account by these elementary codes. The elementary codes can be of different types: recursive systematic convolutional codes (denoted RSC) for convolutional turbocodes, or block codes such as Hamming, RS or BCH codes for block turbocodes. Different types of concatenation can be envisaged. In parallel concatenation, the same information is coded separately by each coder after having been interleaved. In serial concatenation, the output of each coder is coded by the following coder after having been interleaved. The dimension of the turbocode is the number of elementary coders used for implementing it. The interleavings used can be of the uniform type, for example obtained by entering the data to be interleaved row by row in a matrix and retrieving them column by column; this type of interleaving is notably employed in block turbocodes. In general, in order to improve performance, turbocodes use non-uniform interleavings. This is notably the case with convolutional turbocodes.
The second concept is the iterative decoding of the turbocode, also referred to as turbodecoding. Each iteration of the decoding consists of the concatenation of several elementary decoding operations. The elementary decoders used for this purpose are of the weighted input and output type and each correspond to an elementary coder of the turbocoder. The weighted inputs and outputs of an elementary decoder represent the probabilities of the binary or m-ary data respectively input to and output from the corresponding elementary coder. The weighted inputs and outputs can be labelled in terms of probabilities, likelihood ratios or log likelihood ratios (also denoted LLRs).
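The three labellings mentioned above are equivalent; for a binary datum, a probability and its log likelihood ratio are related by a simple pair of formulas, which can be sketched as follows (the helper names are illustrative, not part of any standard API):

```python
import math

def llr_from_prob(p1):
    """LLR of a binary datum from its probability P(b = 1)."""
    return math.log(p1 / (1.0 - p1))

def prob_from_llr(llr):
    """Inverse mapping: recover P(b = 1) from the LLR."""
    return 1.0 / (1.0 + math.exp(-llr))
```

An LLR of 0 corresponds to a probability of 0.5, that is to say total uncertainty about the datum.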
According to the scheme of the turbodecoder, the elementary decoders act one after the other (so-called serial turbodecoding) or simultaneously (so-called parallel turbodecoding). Naturally hybrid decoding schemes can also be envisaged. Interleaving and deinterleaving operations occur according to the deinterleaving and interleaving operations performed at the time of coding. They enable each elementary decoder to take into account information presented in the same order as at the input and output of the corresponding elementary coder, each elementary decoder thus using information corresponding to the information input to and output from the corresponding elementary coder. The input information of an elementary decoder is so-called a priori information consisting of noisy information from the corresponding elementary coder. From this a priori information and knowing the coding law of the corresponding elementary coder, the elementary decoder generates a posteriori information, which is an estimation, with greater reliability, of the information input to and/or output from the corresponding elementary coder. The additional information afforded by the a posteriori information compared with the a priori information is referred to as extrinsic information.
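When the weighted values are labelled as log likelihood ratios, the extrinsic information defined above is simply the difference between the a posteriori and the a priori values; a minimal sketch:

```python
def extrinsic_llr(a_posteriori, a_priori):
    # Extrinsic information: the additional information that the
    # a posteriori LLR affords compared with the a priori LLR
    # (log-domain labelling assumed).
    return a_posteriori - a_priori
```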
Various algorithms can be used in the elementary decoding operations, notably the so-called MAP (Maximum A Posteriori), Log MAP and MaxLogMAP algorithms, also referred to as APP, LogAPP and MaxLogAPP, which all derive from the calculation of a posteriori probabilities knowing the a priori probabilities. These algorithms are for example described in the article entitled “Optimal and sub-optimal maximum a posteriori algorithms suitable for turbo-decoding” by P. Robertson, P. Hoeher and E. Villebrun, which appeared in European Trans. on Telecomm., Vol. 8, pages 119-125, March-April 1997. For block turbocodes, the Chase algorithm can be used, as described in the article entitled “Near optimum product codes” which appeared in Proc. IEEE Globecom 1994, pages 339-343.
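The difference between the Log MAP and MaxLogMAP algorithms cited above lies in the treatment of expressions of the form log(e^a + e^b): Log MAP keeps a correction term whilst MaxLogMAP drops it, trading a small loss of accuracy for lower complexity. This can be sketched as:

```python
import math

def max_star(a, b):
    """Exact Jacobian logarithm used in Log MAP: log(e**a + e**b)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_star_approx(a, b):
    """MaxLogMAP approximation: the correction term is dropped."""
    return max(a, b)
```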
According to the type of turbocoding used, either the extrinsic information issuing from an elementary decoder combined with the systematic information, or directly the a posteriori information issuing from an elementary decoder, will be used, after any interleaving or deinterleaving, as a priori information by the following elementary decoder within the same iteration or by the preceding elementary decoder within the following iteration.
Whatever the case, at each iteration, the information input to and output from the elementary decoders is more and more reliable. The information produced by the end decoding operation or operations of an iteration is used for generating output information which is an estimation of the input information of the coder. In principle, after a sufficient number of iterations, the decoding method stabilises and the algorithm converges. A thresholding is carried out on the output information from the last iteration in order to generate the turbodecoded sequence. Although suboptimal in principle, turbodecoding in general gives performance close to that of the optimal decoder, whilst nevertheless having appreciably lower complexity, of the order of that of the decoders of the elementary codes.
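The thresholding applied to the output information of the last iteration can be sketched as follows, assuming the usual sign convention in which a positive LLR denotes a bit 1:

```python
def hard_decision(llrs):
    # Threshold each weighted value at 0: positive -> bit 1, otherwise bit 0.
    return [1 if llr > 0 else 0 for llr in llrs]
```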
Before dealing in more detail with the structure of a few turbodecoders, it is necessary to briefly state the structure of the corresponding turbocoders.
FIG. 1 illustrates a turbocoder of the so-called PCCC (Parallel Concatenated Convolutional Code) type with n dimensions. The coding device comprises a set of elementary coders (11i) concatenated in parallel and separated by interleavers (10i). Each of the elementary coders is of the recursive systematic convolutional (RSC) type. Each elementary coder codes an interleaved version of the useful input information. The outputs of the different elementary coders are multiplexed by a multiplexer (12). The systematic part (X) is transmitted only once for all the coders, in non-interleaved form.
FIG. 2 illustrates a turbocoder of the so-called SCCC (Serially Concatenated Convolutional Code) type with n dimensions. The coding device comprises a set of elementary coders (21i) of the RSC type concatenated in series, two consecutive coders being separated by an interleaver (20i). Since each coder introduces its own redundancy, the interleavers of increasing rank are of increasing size.
FIG. 3 illustrates a turbocoder of the so-called BTC (Block Turbo-Code) type. Here too, the coding device consists of a set of elementary coders (31i) concatenated in series, each elementary coder being a block code (Hamming, RS or BCH, for example) operating on one dimension of the block.
FIG. 4a illustrates a turbodecoder of the serial type for information coded by the PCCC turbocoder of FIG. 1.
The decoder comprises a set of elementary decoders concatenated in series, each elementary decoder (41i) corresponding to the elementary coder (11i) of the turbocoder.
In the example depicted, the elementary decoders use the LogAPP algorithm and have soft inputs and outputs in the form of log likelihood ratios (also denoted LLRs).
For reasons of clarity the interleavers and deinterleavers have not been shown. It goes without saying, however, that the input data of an elementary decoder must be presented in the same order as for the corresponding coder.
The decoding operation comprises a sequence of iterations 1 to k, each iteration consisting of an identical set of elementary decoding operations.
The input (e) of the decoder receives from the demodulator information in the form of weighted values which are a function of the respective probabilities of the symbols received.
The information received contains a part (X) corresponding to the systematic information and redundant parts (Yi) corresponding respectively to the information output from the elementary coders. A demultiplexer (40) provides the demultiplexing of the different parts of the information received. In addition to the information (Yi), each elementary decoder Di (41i) naturally receives the systematic information (X), suitably interleaved (input not shown for reasons of clarity), and the extrinsic information ei-1 supplied by the previous decoder.

At the first iteration, the extrinsic information at the input of the first elementary decoder D1 is initialised to 0 and the a priori systematic information at the input of D1 is the received systematic part (X). D1 uses the first redundant information (Y1) to produce a new estimation of the systematic part, also referred to as a posteriori information. The difference between the a posteriori information and the a priori information is the extrinsic information generated by the decoder. This extrinsic information (suitably interleaved) is added to the systematic information (also suitably interleaved) in order to constitute the a priori systematic information of the following decoder. The process continues from decoder to decoder as far as Dn. The extrinsic information produced by the end elementary decoder Dn is transmitted (in fact fed back, if a single set of elementary decoders is used) to D1 and a new complete decoding cycle is iterated.

From iteration to iteration, the estimation of the systematic part gains in reliability, and at the end of a number k of iterations the weighted values representing the systematic part (s) are subjected to a hard decision by means of the thresholding device (44). In the case where, for example, the weighted values are weighted bits, information represented by a sequence of bits is obtained at the output (S).
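One elementary decoding stage of this serial loop can be sketched in the log domain as follows; the elementary decoder is a hypothetical soft-in soft-out callable, and the interleaving operations are omitted for clarity, as in the figure:

```python
def decoding_stage(systematic, extrinsic_in, elementary_decoder, redundancy):
    # A priori systematic information: the received systematic LLRs plus
    # the extrinsic information supplied by the previous decoder.
    a_priori = [s + e for s, e in zip(systematic, extrinsic_in)]
    # The elementary decoder exploits its redundant part (Yi) to produce
    # a posteriori information (hypothetical callable, not a real API).
    a_posteriori = elementary_decoder(a_priori, redundancy)
    # Extrinsic information for the following decoder: difference between
    # the a posteriori and the a priori information.
    extrinsic_out = [ap - p for ap, p in zip(a_posteriori, a_priori)]
    return a_posteriori, extrinsic_out
```

At the first iteration, extrinsic_in is all zeros, as stated above for the LogAPP algorithm.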
It goes without saying that other types of elementary decoder can be used. In particular, if an algorithm of the non-logarithmic type is used, the addition and subtraction operations are to be replaced by multiplication and division operations. The initial values of the extrinsic information must also be modified accordingly (1 for an APP algorithm, 0.5 for an algorithm evaluating the probabilities).
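In a non-logarithmic (likelihood-ratio) labelling, the subtraction of the log domain thus becomes a quotient and the neutral initial value becomes 1; a minimal sketch:

```python
def extrinsic_lr(a_posteriori, a_priori):
    # Likelihood-ratio domain: division replaces subtraction, and the
    # neutral extrinsic value is 1 instead of 0.
    return a_posteriori / a_priori
```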
FIG. 4b illustrates a turbodecoder of the parallel type for information coded by the PCCC turbocoder of FIG. 1.
The decoder comprises a set of elementary decoders concatenated in parallel, each elementary decoder (41i) corresponding to the elementary coder (11i) of the turbocoder.
In the example depicted, the elementary decoders use the LogAPP algorithm and have weighted inputs and outputs in the form of log likelihood ratios. Here too, although the interleavers and deinterleavers have not been shown, the input data for the elementary decoder must be presented in the same order as for the corresponding coder.
The decoding operation comprises a sequence of iterations 1 to k, each iteration consisting of an identical set of elementary decoding operations.
The principle of the decoding is similar to that described for serial concatenation, the exchanges of extrinsic information taking place here in parallel between two successive iterations. Each elementary decoder Di (41i) receives the redundant part (Yi), a suitably interleaved version of the systematic part and the extrinsic information from all the other decoders of the previous iteration. The decoders of one and the same iteration work in parallel: each produces a posteriori systematic information and deduces therefrom extrinsic information by difference between the a posteriori systematic information and the a priori systematic information. At the input of an elementary decoder Di, the different items of extrinsic information ej with j≠i (suitably interleaved) are added to a suitably interleaved version of the systematic information X. The decoder uses the redundant information Yi to supply a new estimation of the systematic part, or a posteriori systematic information.
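The combination at the input of an elementary decoder in the parallel scheme can be sketched as follows (interleaving omitted; the function name is illustrative):

```python
def a_priori_for(i, systematic, extrinsics):
    # A priori systematic input of decoder i: the systematic LLRs plus
    # the extrinsic information of all the other decoders j != i from
    # the previous iteration.
    return [s + sum(ext[n] for j, ext in enumerate(extrinsics) if j != i)
            for n, s in enumerate(systematic)]
```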
The elementary decoders of the first iteration receive extrinsic information initialised to 0 (where the LogAPP algorithm is used).
The decoders of the last iteration each supply an estimation of the systematic information (si). The weighted values representing these estimations are, for example, added one by one (43) before a hard decision (44).
It will be understood that a serial-parallel hybrid decoding can be envisaged with different extrinsic information propagation modes. The decoded information output (S) results in all cases from a hard decision from estimations of the systematic parts supplied by the end elementary decoders of the last iteration.
FIG. 5 illustrates a turbodecoder corresponding to the SCCC turbocoder of FIG. 2.
The structure of this decoder was described in an article by S. Benedetto, G. Montorsi, D. Divsalar and F. Pollara entitled “Serial concatenation of interleaved codes: Performance analysis, design and iterative decoding”, published in JPL TDA Progr. Rep., vol. 42-126, August 1996.
The decoder comprises a set of elementary decoders concatenated in series, each elementary decoder Di (51i) corresponding to the elementary coder Ci (21i) of the turbocoder.
The decoding operation comprises a sequence of iterations 1 to k, each iteration consisting of an identical set of elementary decoding operations.
For reasons of clarity the interleavers and deinterleavers have not been shown. It goes without saying, however, that the input data of an elementary decoder must be presented in the same order as for the corresponding coder. In particular, two elementary decoders Di and Di+1 in one and the same iteration are separated by a deinterleaver corresponding to the interleaver (20i) separating the coders Ci and Ci+1. Likewise the output (Oc) of an elementary decoder Di+1 is interleaved by an interleaver identical to (20i) before being supplied to the decoder Di of the following iteration.
Each elementary decoder has two inputs Ic and Iu and two outputs Oc and Ou. The input Ic receives a priori information relating to data output from the coder Ci whilst the input Iu receives a priori information relating to data input to the said coder. Likewise, the output Oc supplies a posteriori information relating to data output from the coder Ci and the output Ou supplies a posteriori information relating to data input to the said coder. The a posteriori information supplied at Oc by an elementary decoder Di+1 is used as a priori information by the decoder Di of the following iteration, enabling it to effect a more reliable estimation of the information input to and output from the corresponding coder Ci.
The elementary decoders of the first iteration and the end elementary decoder D1 of the last iteration receive a zero value at their input Iu, given that no a posteriori information from a previous iteration is available.
The output Ou of the end elementary decoder D1 of the last iteration supplies, in the form of weighted values, an estimation of the input information of the coder C1, that is to say of the useful information (X). These values are subjected to a hard decision by thresholding (54) in order to supply the decoded information (S).
FIG. 6 illustrates a turbodecoder corresponding to the BTC turbocoder of FIG. 3.
The decoder comprises a set of elementary decoders concatenated in series, each elementary decoder Di (61i) corresponding to the elementary coder Ci (31i) of the turbocoder.
The decoding operation comprises a sequence of iterations 1 to k, each iteration consisting of an identical set of elementary decoding operations.
The information to be decoded is presented as an n-dimensional block of weighted values supplied, for example, by the input demodulator. The order of the elementary decoders is of little importance, each working here on one orthogonal dimension of the block. The elementary decoders use, for example, the Chase algorithm mentioned above. Each elementary decoder receives the input block in its entirety and carries out an estimation of all the weighted values of the said block according to the coding dimension of the corresponding coder. The extrinsic information is deduced by difference (in the case of a decoder using a logarithmic algorithm) between this a posteriori information and the a priori information, an item of extrinsic information being presented in the form of a block of weighted values with the same size as the coded block. This extrinsic information is added to the input information in order to serve as a priori information for another decoder. Thus, by successive passes from one dimension to another and from one iteration to the following one, the estimation of the systematic part gains reliability. The weighted values representing this estimation are then subjected to a hard decision by thresholding (64) in order to supply the decoded systematic information S.
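For a two-dimensional block, the successive passes from one dimension to the other described above can be sketched as follows; the dimension-wise soft-in soft-out decoders are hypothetical callables returning a posteriori blocks of the same shape:

```python
def block_turbodecode_2d(channel, decoders, iterations):
    """Toy 2-D sketch: alternate the elementary decodings over the
    dimensions, exchanging extrinsic blocks at each pass (log domain)."""
    rows, cols = len(channel), len(channel[0])
    ext = [[0.0] * cols for _ in range(rows)]
    for _ in range(iterations):
        for decode in decoders:          # one pass per dimension
            # A priori block: channel values plus current extrinsic block.
            a_priori = [[channel[r][c] + ext[r][c] for c in range(cols)]
                        for r in range(rows)]
            a_post = decode(a_priori)
            # New extrinsic block: a posteriori minus a priori.
            ext = [[a_post[r][c] - a_priori[r][c] for c in range(cols)]
                   for r in range(rows)]
    # Hard decision by thresholding on the final estimation.
    return [[1 if channel[r][c] + ext[r][c] > 0 else 0 for c in range(cols)]
            for r in range(rows)]
```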
Although turbocodes give performance close to the theoretical Shannon limit for large blocks of data, this performance deteriorates in certain configurations: small blocks of data, turbocodes with a high number of dimensions, or block turbocodes used on non-Gaussian channels. The turbodecoding then does not converge, or converges towards a sub-optimal solution, leading to erroneous decoded information.