1. Field of the Invention
The present invention relates to a turbo decoder, a turbo decoding method, and a turbo decoding program, and more particularly to improvements of a high-performance, highly reliable turbo decoding method for decoding turbo codes used in communication systems and/or information processing systems.
The present application claims priority of Japanese Patent Application No. 2004-219456 filed on Jul. 28, 2004, which is hereby incorporated by reference.
2. Description of the Related Art
In recent years, the turbo coding method has been studied and developed as a high-performance, highly reliable coding method for a wide range of communication and information processing fields, including mobile communication systems, information storage systems, and digital broadcasting systems. Turbo codes (developed by C. Berrou et al.) are error-correcting codes that realize a transmission characteristic close to the limit given by Shannon's theorem.
First, a conventionally used turbo encoder and turbo decoder are described. FIGS. 4 and 5 are schematic block diagrams showing configurations of the conventional turbo encoder and turbo decoder. FIG. 4 is a schematic block diagram showing a turbo encoder having a code rate of ⅓. The turbo encoder shown in FIG. 4 includes convolutional encoders E11 and E22 and an interleaver E21. A sequence of information bits to be encoded E01 is branched: it is transmitted as the systematic bit sequence E02 and is also input to the convolutional encoder E11 and to the interleaver E21.
The convolutional encoder E11 encodes the information bit sequence E01 using an error-correcting code and outputs a parity bit sequence E12. The interleaver E21 generally writes the information bit sequence E01 into a memory once and reads it out in an order different from the writing order, thereby outputting the data in interleaved order to the convolutional encoder E22. The convolutional encoder E22 encodes the interleaved information bit sequence using an element code and outputs a parity bit sequence E23. As the convolutional encoders E11 and E22, RSC (Recursive Systematic Convolutional) encoders are ordinarily used.
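The encoder structure described above can be sketched in Python. This is an illustrative model only: the memory-2 RSC with generators (1, 5/7) in octal and the example interleaver permutation are assumptions for the sketch, since the actual generators of E11/E22 are not given in the text.

```python
def rsc_encode(bits):
    """Toy recursive systematic convolutional (RSC) encoder with
    generators (1, 5/7) in octal and memory 2 (an assumed example).
    Returns the parity bit sequence."""
    s1 = s2 = 0
    parity = []
    for u in bits:
        d = u ^ s1 ^ s2        # feedback: g0 = 1 + D + D^2 (octal 7)
        parity.append(d ^ s2)  # feedforward: g1 = 1 + D^2   (octal 5)
        s1, s2 = d, s1
    return parity

def turbo_encode(bits, perm):
    """Rate-1/3 turbo encoder: systematic bits, parity from the first
    RSC, and parity from a second RSC fed the interleaved bits."""
    systematic = list(bits)
    parity1 = rsc_encode(bits)              # convolutional encoder E11
    interleaved = [bits[i] for i in perm]   # interleaver E21
    parity2 = rsc_encode(interleaved)       # convolutional encoder E22
    return systematic, parity1, parity2
```

For every input bit, three output bits (one systematic, two parity) are produced, which is why the code rate is ⅓.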
As shown in FIG. 5, the turbo decoder includes convolutional decoders D04 and D16, an interleaver D12, a deinterleaver D23, and a hard-decision making and result outputting block D25. Input to the convolutional decoder D04 are a systematic information sequence D01 corresponding to the systematic bit sequence E02, a parity information sequence D02 corresponding to the parity bit sequence E12, and extrinsic information D03. The extrinsic information D11 output by this decoder is then used by the next convolutional decoder D16.
Furthermore, the obtained extrinsic information D11 and the systematic information sequence D01 are input through the interleaver D12, together with a parity information sequence D14 corresponding to the parity bit sequence E23, to the convolutional decoder D16. Soft-output information D21 and extrinsic information D22 obtained by the convolutional decoder D16 are then output to the deinterleaver D23.
The deinterleaver D23 outputs information in the reverse of the order in which the interleaver D12 interleaved it. That is, the interleaved order of the soft-output information D21 and the extrinsic information D22 is restored to the original order, and the results are output as soft-decision output information D24 and extrinsic information D03. The hard-decision making and result outputting block D25 makes a hard decision on the soft-decision output information D24 and outputs the final decoded result. The extrinsic information D03 is fed back to the convolutional decoder D04 for use in subsequent processing.
As described above, in the turbo decoder shown in FIG. 5, decoding processing is repeated while the extrinsic information D03 and D15 output from the two convolutional decoders D04 and D16 is renewed, and after a plurality of decoding iterations a hard decision is made on the soft-decision output information D24.
It is reported that, as a soft-output decoding method applied to the convolutional decoders of a turbo decoder, the MAP (Maximum A Posteriori probability) decoding method performs best at present. However, since the decoder scale and the amount of processing of the full algorithm are remarkably large, the Max-Log-MAP (Max Logarithmic Maximum A Posteriori) decoding method is generally and widely used instead: when the algorithm is actually implemented in a convolutional decoder, its processing is simplified by determining whether the transmitted data is "1" or "0" according to the maximum value of the likelihood.
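The simplification that distinguishes Max-Log-MAP from the full Log-MAP algorithm can be stated in a few lines of Python: the exact Jacobian logarithm ln(e^a + e^b) is replaced by a plain maximum. This is a sketch of the standard approximation, not code taken from the text; the function names are assumptions.

```python
import math

def maxstar(a, b):
    """Exact Jacobian logarithm used by full Log-MAP:
    ln(e^a + e^b) = max(a, b) + ln(1 + e^(-|a - b|))."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def maxlog(a, b):
    """Max-Log-MAP approximation: the correction term is dropped
    and only the maximum of the two likelihood terms is kept."""
    return max(a, b)
```

Dropping the correction term trades a small loss in accuracy for a large reduction in per-branch arithmetic, which is why Max-Log-MAP is preferred in hardware.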
The MAP algorithm is a maximum likelihood decoding algorithm that uses a Trellis diagram. FIG. 6A shows an example of the configuration of a convolutional encoder having three registers. The Trellis diagram, as shown in FIG. 6B, shows the relation between the output value obtained when a value is input to the convolutional encoder and the states of the registers.
The MAP algorithm is roughly classified into the following three types of processing:
(a) Forward processing: the probability (forward path metric value) of reaching each state at each time point from the head of the Trellis diagram is calculated.
(b) Backward processing: the probability (backward path metric value) of reaching each state at each time point from the end of the Trellis diagram is calculated.
(c) Soft-output generating processing and calculations of the extrinsic value: a soft-output value of the systematic bit at each time point is calculated by using the results of the above (a) forward processing and (b) backward processing. Then, by using the soft-output value, an extrinsic value is calculated.
In the Trellis diagram, the forward path metric value and the backward path metric value calculated in the forward processing and backward processing at a time point "t" and in a state "s" are represented as Alpha(t, s) and Beta(t, s), respectively. Moreover, the probability of a transition from the state "s" to the state "s′" at the time point "t" is represented as Gamma(t, s, s′) (here, Gamma is called a "branch metric value"). Gamma is a probability obtained from the received values (the systematic information sequence, the parity information sequence, and the extrinsic information).
Each of the forward processing, backward processing, soft-output generating processing, and extrinsic value calculations described above is performed as follows:
(a) Forward processing:
Alpha(t, s) = Max{Alpha(t−1, s′) + Gamma(t, s′, s)}
Here, the equation indicates that the operation "Max", which selects the maximum value, is performed over all states "s′".
As shown in FIG. 7A, Alpha(t, S3) for the state S3 at time point "t" is calculated as follows: the branch metrics Gamma(t, S1, S3) and Gamma(t, S2, S3) are added respectively to the path metrics Alpha(t−1, S1) and Alpha(t−1, S2) of the two preceding states, and the larger sum becomes the Alpha value of the state (this is called the "Alpha ACS arithmetic calculation"). This processing is performed for all states at all time transitions "t", and the Alpha values of all states are held.
Since no Alpha value exists at the preceding stage in the first calculation of an Alpha value, an initial value must be set. Since the transition always starts from state #0 in the Trellis diagram, the initial Alpha value is set to "0" in state #0 and to the "−MAX" value (minimum value) in all states other than state #0.
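The forward processing just described (the Alpha ACS recursion plus the state-#0 initialization) can be sketched in Python. The trellis representation, a per-time-step mapping from transitions (s′, s) to branch metrics, and the function name are assumptions for illustration; "−MAX" is represented by minus infinity.

```python
NEG_INF = float("-inf")

def forward_alpha(gammas, num_states):
    """Max-Log-MAP forward recursion (Alpha ACS).
    gammas[t] maps a transition (s_prev, s_next) to its branch metric
    Gamma(t+1, s_prev, s_next).  Returns alpha[t][s] for t = 0 .. T."""
    T = len(gammas)
    alpha = [[NEG_INF] * num_states for _ in range(T + 1)]
    alpha[0][0] = 0.0                          # trellis always starts in state #0
    for t in range(1, T + 1):
        for (s_prev, s_next), g in gammas[t - 1].items():
            cand = alpha[t - 1][s_prev] + g    # add
            if cand > alpha[t][s_next]:        # compare-select (max over s')
                alpha[t][s_next] = cand
    return alpha
```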
(b) Backward processing:
Beta(t, s) = Max{Beta(t+1, s′) + Gamma(t+1, s, s′)}
Here, the maximum is likewise taken over all states "s′".
As shown in FIG. 7B, Beta(t, S4) for the state S4 at time point "t" is calculated as follows: the branch metrics Gamma(t+1, S4, S5) and Gamma(t+1, S4, S6) are added respectively to the path metrics Beta(t+1, S5) and Beta(t+1, S6) of the two succeeding states S5 and S6, and the larger sum becomes the Beta value of the state (this is called the "Beta ACS arithmetic calculation").
This Beta calculating processing is performed for all states at all time transitions "t" in the direction reverse to that of the Alpha values (starting from the final state in the Trellis diagram). Since no Beta value exists at the succeeding stage in the first calculation of a Beta value, an initial value must be set. At the final end of the Trellis diagram, the initial Beta value is set to "0" in state #0 and to the "−MAX" value (minimum value) in all states other than state #0.
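The backward processing can be sketched symmetrically to the forward case, again under the assumed trellis representation (gammas[t] maps each transition (s_prev, s_next) to its branch metric), with the known terminated end state initialized to 0 and all others to minus infinity.

```python
NEG_INF = float("-inf")

def backward_beta(gammas, num_states, final_state=0):
    """Max-Log-MAP backward recursion (Beta ACS), run from the end of
    the trellis toward its head.  Assumes the trellis is terminated in
    `final_state` (state #0 in the text)."""
    T = len(gammas)
    beta = [[NEG_INF] * num_states for _ in range(T + 1)]
    beta[T][final_state] = 0.0                 # terminated trellis: end state known
    for t in range(T - 1, -1, -1):
        for (s_prev, s_next), g in gammas[t].items():
            cand = beta[t + 1][s_next] + g     # add
            if cand > beta[t][s_prev]:         # compare-select (max over s')
                beta[t][s_prev] = cand
    return beta
```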
(c) Soft-output generating processing and extrinsic value calculations:
By adding the Alpha value Alpha(t−1, s′), the Beta value Beta(t, s), and the Gamma value Gamma(t, s′, s) obtained by the above calculations, all the path metric values at the time point "t" are calculated. The difference between the maximum path metric value of the paths whose decoded result is "0" and the maximum path metric value of the paths whose decoded result is "1" becomes the soft-output value at the time point "t".
As shown in FIG. 8, for all combined state pairs (s′, s) at the time point t = 5, the values Alpha(4, s′), Beta(5, s), and Gamma(5, s′, s) are added. Out of these, the maximum path metric value L0(t) of the paths whose decoded result is "0" is calculated. In the example, the following equation holds:
L0(t=5) = Alpha(t=4, state #7) + Beta(t=5, state #6) + Gamma(t=5, state #7, state #6)

On the other hand, the maximum path metric value L1(t) of the paths whose decoded result is "1" is calculated. In the example, the following equation holds:

L1(t=5) = Alpha(t=4, state #4) + Beta(t=5, state #0) + Gamma(t=5, state #4, state #0)

Then, the soft-output value at the time t = 5 is calculated as:

L(t=5) = L0(t=5) − L1(t=5)
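The soft-output generation at one time step can be sketched as follows: over all transitions, the best path metric among those whose decoded bit is 0 and the best among those whose bit is 1 are found, and their difference is the soft output L(t) = L0(t) − L1(t). The data layout (a bit label per transition) is an assumption for illustration.

```python
NEG_INF = float("-inf")

def soft_output(alpha_prev, beta_cur, gammas_t, bit_of):
    """Max-Log-MAP soft output at one time point "t".
    alpha_prev is Alpha(t-1, .), beta_cur is Beta(t, .), gammas_t maps
    each transition (s_prev, s_next) to Gamma(t, s_prev, s_next), and
    bit_of[(s_prev, s_next)] is the decoded bit of that transition."""
    best = {0: NEG_INF, 1: NEG_INF}
    for (s_prev, s_next), g in gammas_t.items():
        metric = alpha_prev[s_prev] + g + beta_cur[s_next]
        bit = bit_of[(s_prev, s_next)]
        if metric > best[bit]:
            best[bit] = metric
    return best[0] - best[1]       # L(t) = L0(t) - L1(t)
```

A positive result means the bit "0" hypothesis has the larger maximum path metric, and the magnitude expresses the reliability of that decision.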
Also, in the Max-Log-MAP algorithm, the value obtained by subtracting the channel value (a value obtained from the received value) and the a priori value (the extrinsic information fed from the decoder at the rear stage) from the soft-output value (a posteriori value) obtained by the above processing becomes the extrinsic information.
As described above, in the ideal Max-Log-MAP algorithm, the arithmetic operations for the Alpha and Beta values are performed at one time over all the data to be decoded.
However, as the data length in data transmission becomes larger, the Max-Log-MAP decoding method requires tremendous memory areas. In particular, a memory area for storing the path metric value information of the entire Trellis diagram becomes necessary. Also, the decoding delay increases with the length of the data to be decoded, making it difficult to apply the Max-Log-MAP decoding method to a real-time system.
To solve this problem, the sliding window method is widely used (see A. J. Viterbi, "An Intuitive Justification and a Simplified Implementation of the MAP Decoder for Convolutional Codes," IEEE J. Select. Areas Commun., Vol. 16, pp. 260-264, February 1998). In this method, only the likelihood information for one window of the Trellis diagram is stored, and by shifting the window position until it reaches the decoding length, considerable savings in memory can be achieved.
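The memory saving can be made concrete with a rough accounting: the full algorithm holds path metrics for the whole trellis, while the sliding window method holds only one window's worth. This is an illustrative model (function name and counting granularity are assumptions, not a full hardware accounting):

```python
def path_metric_storage(block_len, num_states, window=None):
    """Rough count of path-metric entries held at once: one per trellis
    position and state over either the whole block (full Max-Log-MAP)
    or a single window (sliding window method)."""
    span = block_len if window is None else min(window, block_len)
    return span * num_states
```

With the 512-bit block, 8-state, 128-sample-window example used later in the text, the windowed decoder holds one quarter of the full decoder's path metric entries.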
Moreover, to overcome the degradation in decoding capability caused by the indeterminate Beta initial value at each window, Beta training processing over one window is performed on the Beta initial value prior to the Beta value calculations. As the number of decoding iterations increases, the reliability of the initial value calculated by the Beta training section improves, and excellent capability can be obtained even with a shorter training length (data training length, that is, training size).
On the other hand, when the communication environment is in a bad state, setting a fixed training length of one window cannot prevent the degradation in decoding capability. Therefore, so that needless Beta training calculations can be eliminated, a method of gradually shortening the Beta training length as the number of decoding iterations increases is required.
FIG. 9 is a block diagram showing the functions of a conventional sliding window turbo decoder that performs training processing on the Beta initial values prior to the Beta value calculations. FIG. 10 is a timing chart showing the operational processes of the block diagram of FIG. 9. As shown in FIG. 9, systematic information 01 for one window, parity information 02 for one window, and extrinsic information 03 obtained by the preceding convolutional decoder are stored in an input memory 21 in order to perform the forward processing (Alpha value calculations).
Moreover, in order to perform the training processing on the Beta initial values, the Beta path metric value obtained in the previous decoding iteration and stored in a Beta initial value memory 14 is used as an initial value 04 by a Beta training calculating section 12. The information stored in an input memory 31 and the information from the Beta initial value memory 14 are input to a Beta value calculating section 32, which calculates the Beta values, and the final Beta value obtained at each window is stored in the Beta initial value memory 14.
Then, LLR (Log Likelihood Ratio, that is, soft-output value) calculations are performed in an LLR calculating section 41 on the Beta path metric values obtained as above, in synchronization with the Alpha path metric values stored in an Alpha value memory 23. Moreover, in order to obtain the initial values for the Beta training processing of the subsequent decoding, the result from the Beta value calculating section 32 is also written to the Beta initial value memory 14. The calculation result of the LLR calculating section 41 is input through an interleaver/deinterleaver 45 to an extrinsic value calculating section 42, where the extrinsic information is calculated. The resulting extrinsic information is used as the input extrinsic information 03 for the subsequent convolutional decoder D04 or D16 (FIG. 5).
When the decoding performed by the convolutional decoder D16 mounted at the rear stage reaches the pre-set maximum number of decoding iterations, the calculation result from the LLR calculating section 41 is fed to a hard-decision making section 44, where the hard decision is made.
In the above conventional sliding window turbo decoding method, as shown in the operation process diagram of FIG. 10, the Beta training length is fixed. FIG. 10 shows an example in which the code block size is 512, the window size is 128, and the maximum number of decoding iterations is four.
Next, the reason why the Beta training processing is performed is explained. In ideal decoding, the Alpha values are calculated in the forward processing in the order of the input code block positions 1 to 512, and the Beta values are calculated in the order 512 to 1. In the sliding window turbo decoding, however, both the Alpha value calculations and the Beta value calculations are partitioned by windows, and it is therefore impossible to determine the initial value of the Beta value at each window boundary.
For example, when the window size is 128, the Alpha values are calculated in the order 1 to 128 in the first window, the final value of that window serves as the initial value for the subsequent window (129 to 256), the final value of that window in turn serves as the initial value for the next window (257 to 384), and so on; as a result, no training of the Alpha initial value is required at any window.
On the contrary, in the Beta value calculating processes, the Beta values of the first window are calculated in the order 128 to 1; however, since the Beta value at position 129 is unknown, predefined training is required. Likewise, at the subsequent window (256 to 129), the Beta value at position 257 is unknown and predefined training is required, and so on; training of the Beta initial value is thus required at every window.
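The window layout and the training regions described above can be sketched as follows. The 1-based inclusive ranges mirror the 512/128 example in the text; the function names and the argument layout are assumptions for illustration.

```python
def window_schedule(block_len, window):
    """Windows swept by the sliding window method, as 1-based inclusive
    (start, end) ranges over the code block."""
    return [(lo, min(lo + window - 1, block_len))
            for lo in range(1, block_len + 1, window)]

def beta_training_range(window_end, train_len, block_len):
    """Trellis positions used to train the Beta initial value of the
    window ending at `window_end`: up to `train_len` positions beyond
    the window, processed backwards.  The last window needs no training
    because the Beta initial value at the terminated end is known."""
    if window_end >= block_len:
        return None
    return (window_end + 1, min(window_end + train_len, block_len))
```

For the first window (1 to 128) with a one-window training length, training runs backwards over positions 256 to 129, which matches the description above.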
Moreover, an example of the sliding window turbo decoding method is disclosed in Japanese Patent Application Laid-open No. 2002-314437.
As described above, to overcome the degradation of decoding capability caused by the indeterminate Beta initial value at each window, initial value training processing over one window is performed prior to the Beta calculations; as the number of decoding iterations increases, the reliability of the initial value from the Beta training section improves, enabling excellent decoding capability to be obtained even with a shorter training length.
On the other hand, when the communication channel is in a bad condition, even a fixed training length of one window cannot prevent the degradation in decoding capability. Therefore, in order to eliminate needless Beta training calculations, a setting method is required that gradually shortens the Beta training length as the number of decoding iterations increases.
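One way the "gradually shortening" setting method could look is a schedule that halves the Beta training length each iteration down to a floor. This is purely a hypothetical illustration of the idea; the actual rule the invention proposes is not specified in this section.

```python
def training_length(iteration, window, min_len=8):
    """Hypothetical schedule: start with a full-window Beta training
    length and halve it each decoding iteration, never going below
    `min_len` (all numbers are illustrative assumptions)."""
    return max(window >> (iteration - 1), min_len)
```

Under this schedule, early iterations (where the trained initial value is still unreliable) get long training, while later iterations, which already benefit from improved extrinsic information, spend far fewer Beta training calculations.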