1. Field of the Invention
The present invention relates to a decoder device that performs iterative decoding, and to a decoding method.
2. Description of the Related Art
Turbo codes and LDPC (Low Density Parity Check) codes are error correction codes used in data transfer. The turbo code is explained here as an example. FIG. 1 is a configuration example of a turbo encoder 200 (e.g., 3GPP Technical Specification TS25.212). FIG. 2 is a diagram showing a configuration example of a turbo decoder 400 (e.g., C. Berrou, A. Glavieux, and P. Thitimajshima, “Near Shannon limit error-correcting coding and decoding: Turbo-codes (1),” in proc. of the IEEE int. conf. on Communications (ICC'93), Geneva, 1993, pp. 1064-1070).
The turbo encoder 200 includes, as shown in FIG. 1, a first element encoder 210, a second element encoder 230 and an interleaver 220. Information bits input to the turbo encoder 200 are encoded by the first element encoder 210 to generate first parity bits. The input information bits are also interleaved by the interleaver 220, and the interleaved bits are encoded by the second element encoder 230 to generate second parity bits. Further, the information bits themselves are output as systematic bits. These three kinds of bits are then, for example, combined into a serial bit stream, which is modulated for transmission.
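As an illustration, the three output streams described above can be sketched as follows. Here `rsc_parity` is a hypothetical stand-in for an element encoder: the real element encoders 210 and 230 are recursive systematic convolutional encoders defined in TS25.212, not this toy accumulator, and the interleaving pattern is an arbitrary example.

```python
def rsc_parity(bits):
    # Hypothetical stand-in for an element encoder: a 1-bit accumulator.
    # The real encoders 210/230 are recursive systematic convolutional
    # encoders (TS25.212); this toy only illustrates the data flow.
    state, parity = 0, []
    for b in bits:
        state ^= b
        parity.append(state)
    return parity

def turbo_encode(info, interleave_order):
    systematic = list(info)                            # systematic bits
    parity1 = rsc_parity(info)                         # first element encoder 210
    interleaved = [info[i] for i in interleave_order]  # interleaver 220
    parity2 = rsc_parity(interleaved)                  # second element encoder 230
    # combine the three kinds of bits into one serial stream (rate 1/3)
    return [b for triple in zip(systematic, parity1, parity2) for b in triple]
```

The output carries three bits per information bit, which corresponds to the encoding rate of 1/3 discussed later.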
The turbo decoder 400 includes, as shown in FIG. 2, a first element decoder 410, a second element decoder 420, an interleaver 411 and a deinterleaver 421. Demodulated received likelihood data is input to the turbo decoder 400 (ys corresponding to the systematic bits, yp 1 and yp 2 respectively corresponding to the two kinds of parity bits). The likelihood ys corresponding to the systematic bits, the likelihood yp 1 corresponding to the first parity bits and external information L′e (2) are input to the first element decoder 410, and external information Le (1) is output. The output Le (1) and the likelihood ys corresponding to the systematic bits are input to the interleaver 411 and output after being interleaved. The interleaver 411 is needed because the second parity bits were generated from bits interleaved by the interleaver 220 at the time of encoding, and the decoding must correspond to that processing. The interleaved likelihood ys corresponding to the systematic bits, the external information L′e (1) and the likelihood yp 2 corresponding to the second parity bits are input to the second element decoder 420, and external information Le (2) is output. The external information Le (2) is deinterleaved back to the order of the original sequence by the deinterleaver 421, and is output as the external information L′e (2). By repeating this process, a decoded bit uj is output from the deinterleaver 421. The error rate characteristic of the decoded bit uj is improved by this iterative process.
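For the iteration above to converge on the original bit order, the interleaver 411 and deinterleaver 421 must be exact inverses. A minimal sketch of such a pair, where the permutation `order` is an arbitrary illustrative example rather than the 3GPP interleaving pattern:

```python
def interleave(seq, order):
    # reorder seq according to the permutation used at encoding time
    return [seq[i] for i in order]

def deinterleave(seq, order):
    # invert the permutation, restoring the original order
    out = [None] * len(seq)
    for j, i in enumerate(order):
        out[i] = seq[j]
    return out
```

Passing external information Le (2) through `deinterleave` with the same `order` that produced the interleaved sequence returns it to the original order, as L′e (2) requires.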
FIG. 3 is a diagram showing a configuration example of the first element decoder 410 and the second element decoder 420 of the turbo decoder 400 and their surrounding part (e.g., J. Vogt and A. Finger, “Improving the max-log-MAP turbo decoder,” Electronics Letters 9 Nov. 2000, Vol. 36, No. 23, pp. 1937-1939). Each of the element decoders 410 and 420 includes a MAP computing unit 415 and adding units 416 and 417. In the MAP computing unit 415, an a posteriori probability is generated by a MAP (Maximum a Posteriori) calculation. In view of circuit size, the likelihood obtained by taking the logarithm of the probability value is input to the MAP computing unit 415, as shown, for a log-MAP calculation. In the adding unit 417, the prior likelihood (ys+L′e) is subtracted from the generated a posteriori likelihood, and the external information (return likelihood) is output.
On the other hand, there is a problem that the amount of computation of the log-MAP calculation in the MAP computing unit 415 is generally large. Therefore, a calculation method called the Max-log-MAP calculation, which approximates the log-MAP calculation, has conventionally been used. The Max-log-MAP calculation approximates the “sum of exponential functions of likelihoods”, which is the dominant calculation element, by the maximum value of the terms in each sum. Although this significantly reduces the amount of processing, there was a problem that the error rate characteristic was degraded.
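The approximation can be stated as ln(e^a + e^b) = max(a, b) + ln(1 + e^−|a−b|) ≈ max(a, b), i.e. the correction term, which is at most ln 2, is dropped. A sketch of both forms:

```python
import math

def log_map_sum(a, b):
    # exact Jacobian logarithm: ln(e^a + e^b)
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log_map_sum(a, b):
    # Max-log-MAP: approximate the sum of exponentials by its maximum term
    return max(a, b)
```

The error of each pairwise approximation is bounded by ln 2 ≈ 0.693, but it accumulates over a block and biases the output likelihoods, which is one way to see why the error rate characteristic degrades.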
Conventionally, as shown in FIG. 3, there is a method in which a coefficient scaling unit 430 multiplies the return likelihood (external information) after the Max-log-MAP calculation by a number smaller than “1” (e.g., J. Vogt and A. Finger, “Improving the max-log-MAP turbo decoder,” Electronics Letters 9 Nov. 2000, Vol. 36, No. 23, pp. 1937-1939; hereinafter, this method is referred to as “coefficient scaling”). It is known that the error rate characteristic is improved by this method. In general, a scaling value of “0.75” is used in view of circuit implementation.
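One reason the value “0.75” is convenient for circuit implementation is that, for fixed-point likelihoods, the multiplication reduces to a shift and a subtraction. A sketch under that assumption:

```python
def coefficient_scale(llr):
    # 0.75 * llr computed as llr - llr/4, using an arithmetic right
    # shift in place of the division (hardware-friendly; exact when
    # llr is a multiple of 4, slightly rounded otherwise)
    return llr - (llr >> 2)
```

For example, a return likelihood of 100 scales to 75, and −100 scales to −75, without any multiplier circuit.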
Also, there has been a method called “indexation” for improving the error rate characteristic of the Max-log-MAP calculation (e.g., J. Vogt, J. Ertel and A. Finger, “Reducing bit width of extrinsic memory in turbo decoder realisations,” Electronics Letters 28 Nov. 2000, Vol. 36, No. 20, pp. 1937-1939 and D. Garrett, B. Xu, and C. Nicol, “Energy Efficient Turbo Decoding for 3G Mobile,” in Int. Symp. on Low Power Electronics and Design, 2001, pp. 328-333). FIG. 4 is a diagram showing a configuration example of this method. The output side of the element decoders 410 and 420 (the return likelihood (external information) after the Max-log-MAP calculation) is provided with an exponent position determining unit 440, a memory 441 and a restoring unit 442.
FIG. 5 is a diagram showing an example of the processing performed from the exponent position determining unit 440 to the restoring unit 442. FIG. 6 is a diagram showing a detailed configuration example of that processing. In order to scale down the number of bits N per data, “indexation” searches the bits of the likelihood from the bit next to the sign bit toward the end bit (exponent=1), obtains the bit position at which a bit other than the sign bit is first found (hereinafter referred to as an “exponent”), and stores this exponent in the memory 441; for example, two's complement representation is applied to positive and negative numbers. Then, the exponent is read from the memory 441 and restored by the restoring unit 442.
For example, as shown in FIG. 5, when “209” in decimal is output as the external information (return likelihood), the exponent “8” is obtained in the exponent position determining unit 440. Then, the value “8” is stored in the memory 441. Later, the restoring unit 442 restores the external information (restored value) “0010000000” (“128” in decimal) from the stored value read from the memory 441. At this time, the output is “128” for the input “209”; that is, a value approximately “0.612” times the input is output. In this way, the calculation by “indexation” can reduce the number of bits N per data.
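The “209 → 8 → 128” example can be sketched as follows, assuming 10-bit two's complement values: the exponent is the 1-based position, counted from the end bit, of the most significant set bit of the magnitude, and restoration sets that single bit.

```python
def to_exponent(value):
    # position of the first set bit found when searching from the bit
    # next to the sign bit toward the end bit (0 for value == 0)
    return abs(value).bit_length()

def restore(exponent, negative=False):
    # set a single bit at the stored position to rebuild the magnitude
    magnitude = 0 if exponent == 0 else 1 << (exponent - 1)
    return -magnitude if negative else magnitude
```

Here `to_exponent(209)` gives 8, `restore(8)` gives 128, and the 4-bit exponent replaces the full 10-bit likelihood in the memory 441.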
The calculation by “coefficient scaling” and “indexation” on the return likelihood (external information) computed by the Max-log-MAP calculation has the following problems.
In “coefficient scaling”, the scale-down rate per data is constant because a fixed coefficient value is multiplied. In “indexation”, by contrast, the rate varies according to the value of the return likelihood. Therefore, although the error rate characteristic of “coefficient scaling” is better, the buffer size for storing the return likelihood is larger for “coefficient scaling” than for “indexation”.
Also, the coefficient values used for “coefficient scaling” and “indexation” depend on the encoding rate and the modulation method. For example, the coefficient value “0.75” is suited to encoding at an encoding rate of ⅓ with a modulation method such as BPSK or QPSK; when “coefficient scaling” and “indexation” using this coefficient value are applied to another encoding rate (particularly, a high encoding rate such as 0.8) or another modulation method (16QAM or the like), degradation of the error rate characteristic occurs.
Further, in “indexation”, memory may be wasted because some exponent codes carry no information. For example, for the number of bits N per data (“10” in the above-described example), the number of bits n necessary for storing the exponent is n=⌈log2(N)⌉ (n=⌈3.32 . . . ⌉=4 in the above-described example), and no information is assigned to k=2^n−N of the code values (k=“6” in the above-described example) for positive and negative numbers, respectively.
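The counts in the example above work out as follows:

```python
import math

N = 10                       # number of bits per likelihood value
n = math.ceil(math.log2(N))  # bits needed to store an exponent in 1..N
k = 2 ** n - N               # exponent codes that carry no information
# n = 4 and k = 6: six of the sixteen 4-bit codes are unused,
# for positive and for negative values respectively
```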
Further, there is a problem that, if the number of bits of the prior likelihood, which is the data input to the turbo decoder 400, is large, the size of the storage buffer necessary for the calculation becomes large.
Further, as a result of “indexation”, the number of bits of the external information can be reduced and the error rate characteristic can be improved. However, the restored value differs from the actual value because it is an approximate value. Degradation of the characteristic occurs due to this approximation, for example when the level of the average numeric value is not preserved.