Turbo coding has drawn attention since its first introduction in 1993 because of its high coding gain, and its application to channel coding for next-generation mobile communication systems and the like has been investigated.
The outline of turbo codes is detailed, for example, in the literature (Claude Berrou, 'Near Optimum Error Correcting Coding and Decoding: Turbo-Codes', IEEE Transactions on Communications, Vol. 44, No. 10, Oct. 1996, pp. 1261-1271) and the literature (Motohiko Isaka and Hideki Imai, 'Fingerpost to the Shannon limit: "parallel concatenated (Turbo) coding", "Turbo (iterative) decoding" and its surroundings', Technical Report IT98-51, The Institute of Electronics, Information and Communication Engineers, December 1998).
The features of the turbo code are briefly listed below:
(i) A plurality of identical constituent encoders are concatenated in parallel or in series.
(ii) An interleaver is used to decorrelate the input data sequences supplied to the individual encoders. An interleaver having high randomizing performance is preferred.
(iii) At the decoder end, based on the soft-decision input data, soft-decision decoded data and its soft-decision likelihood data serving as likelihood information are output.
(iv) At the decoder end, the soft-decision likelihood data is used as renewed likelihood information to implement iterative decoding.
Soft-decision data is a value that is not binary data consisting of 1 and 0 but is represented by a particular number of bits. Soft-decision input data is received soft-decision data. Soft-decision likelihood data expresses, as soft-decision data, the likelihood of the data to be decoded. Soft-decision decoded data is decoded data calculated from the soft-decision input data and the soft-decision likelihood data.
The principle of turbo coding will be described briefly. Here, a typical configuration in which encoders are concatenated in parallel is assumed with a code rate of ⅓ and a constraint length K=3. FIG. 10 shows a configuration of the turbo encoder.
This turbo encoder is schematically configured of a recursive systematic convolutional (RSC) encoder 400, a recursive systematic convolutional encoder 410 and an interleaver 420.
RSC encoders 400 and 410 are encoders having an identical configuration for implementing recursive systematic convolution.
Interleaver 420 is a permuter which rearranges an input data sequence X to generate a data sequence X′ consisting of the same elements but in a different order.
The operation of RSC encoders 400 and 410 will be described. As shown in FIG. 11, this encoder can be configured of two shift registers, i.e., registers 430 and two modulo adders 440.
The internal state (b1, b2) of this encoder is represented by the values in the shift registers, there being four internal states, namely internal state (00), internal state (01), internal state (10) and internal state (11). For each input, every internal state has two possible internal states to which it can transfer.
FIG. 12 shows state transitions in RSC encoders 400 and 410.
For the internal state (00), if the input is 0, the internal state transfers to the internal state (00) and the code output is 0; and if the input is 1, the internal state transfers to the internal state (10) and the output code is 1.
For the internal state (01), if the input is 0, the internal state transfers to the internal state (10) and the code output is 0; and if the input is 1, the internal state transfers to the internal state (00) and the output code is 1.
For the internal state (10), if the input is 0, the internal state transfers to the internal state (11) and the code output is 1; and if the input is 1, the internal state transfers to the internal state (01) and the output code is 0.
For the internal state (11), if the input is 0, the internal state transfers to the internal state (01) and the code output is 1; and if the input is 1, the internal state transfers to the internal state (11) and the output code is 0.
The data sequence obtained by subjecting input data sequence X to convolutional coding through RSC encoder 400 is referred to as data sequence Y1. The data sequence obtained by subjecting input data sequence X′, which has been obtained by interleaving input data sequence X through interleaver 420, to convolutional coding through RSC encoder 410 is referred to as data sequence Y2.
In other words, the first code sequence Y1 and the second code sequence Y2 are generated from input data sequence X, and the turbo encoder outputs X, Y1 and Y2 in parallel.
When assuming that input data sequence X is (0,1,1,0,0,0,1) and the data sequence X′ after interleaving by interleaver 420 is (0,0,1,0,1,0,1), the convolutional codes result in Y1 (0,1,0,0,1,1,1) and Y2 (0,0,1,1,0,1,1). The evolutions of the internal state transitions in RSC encoders are shown in FIGS. 13 and 14. FIG. 13 is a diagram showing the internal state transitions for Y1 and FIG. 14 is a diagram showing the internal state transitions for Y2. In FIGS. 13 and 14, the thick lines denote the evolutions of state transitions.
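The worked example above can be checked with a short simulation. The following is a minimal sketch of the RSC encoder; the feedback tap placement (a = u XOR b1 XOR b2, code output a XOR b2) is one realization consistent with the state transitions of FIG. 12, not necessarily the exact wiring of FIG. 11:

```python
def rsc_encode(bits):
    """Recursive systematic convolutional encoding (one realization that
    reproduces the transitions of FIG. 12: feedback a = u ^ b1 ^ b2,
    code output a ^ b2, registers shifted to (a, b1))."""
    b1 = b2 = 0
    out = []
    for u in bits:
        a = u ^ b1 ^ b2      # recursive feedback
        out.append(a ^ b2)   # code output
        b1, b2 = a, b1       # shift the registers 430
    return out

X  = [0, 1, 1, 0, 0, 0, 1]   # input data sequence X
Xp = [0, 0, 1, 0, 1, 0, 1]   # X' after interleaver 420
print(rsc_encode(X))         # [0, 1, 0, 0, 1, 1, 1] = Y1
print(rsc_encode(Xp))        # [0, 0, 1, 1, 0, 1, 1] = Y2
```

Running the sketch reproduces both code sequences Y1 and Y2 of the example.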
Next, a basic configuration example of a turbo decoder is shown in FIG. 15. Each of blocks 600, 610, 620, 630, denoted Iteration 1 to Iteration n, constitutes one decoding unit. Here, the "Iterations", shown as being connected from one to another, represent repeated operation of the same processes.
FIG. 16 shows a decoding unit block, i.e., a block diagram of the components for each Iteration process. This decoding unit block is comprised of soft-decision decoders 10 and 11, interleavers 20 and 21, a recovering interleaver 22 and a decoding calculator 33. The likelihood information E input to the first block 600 has an initial value of all zeros (0, 0, 0, 0, . . . , 0, 0).
Soft-decision decoders 10 and 11 are decoders which output a soft-decision output based on a soft-decision input.
Interleavers 20 and 21 are permuters for implementing the same operation as the interleaver 420 used on the RSC encoder side.
Recovering interleaver 22 is a permuter for recovery of the original data sequence from the data sequence which was permuted by interleavers 20 and 21.
Decoding calculator 33 is a means for generating the data after error correction.
The signals input to the block are the received signal sequences (X, Y1, Y2) and the likelihood information E of the signals. Of these, the signals (X, Y1, E) constitute one set and are supplied to soft-decision decoder 10, where the first soft-input soft-output error correction is implemented. This process executes the error correction corresponding to the encoding in RSC encoder 400 so as to generate renewed likelihood information E1 for each signal.
Next, the second soft-input soft-output error correction is implemented based on the signals (X, Y2, E1), involving the renewed likelihood information E1. In this case, since permutation of the data was performed by interleaver 420 before the convolutional coding in RSC encoder 410 (see FIG. 10), the data of likelihood information E1 is permuted by interleaver 20 and the data of signal X is permuted by interleaver 21, so as to produce the new data sequences (X′, E1′). These new data sequences (X′, E1′) are supplied to soft-decision decoder 11, where the error correction corresponding to the encoding in RSC encoder 410 is implemented so that renewed likelihood information E2 for each signal is generated. Here, since E2 was produced from the data sequences (X′, E1′) rearranged by interleaver 20 and interleaver 21, likelihood information E2 is processed through recovering interleaver 22 so that its data is restored to the original order and output as likelihood information E2′. Likelihood information E2′ is used as the likelihood information when the iterative operations are repeated.
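The data flow of one decoding unit can be sketched structurally as follows. Here soft_decode is a hypothetical stand-in for soft-decision decoders 10 and 11 (its internals are not specified), and perm is the permutation applied by interleavers 20 and 21:

```python
def decode_iteration(x, y1, y2, e, perm, soft_decode):
    """One iteration of FIG. 16 (structural sketch only; soft_decode is a
    hypothetical soft-input soft-output decoder, not defined here)."""
    e1 = soft_decode(x, y1, e)            # decoder 10: first correction
    xp  = [x[i] for i in perm]            # interleaver 21: X  -> X'
    e1p = [e1[i] for i in perm]           # interleaver 20: E1 -> E1'
    e2_perm = soft_decode(xp, y2, e1p)    # decoder 11: second correction
    e2 = [0.0] * len(x)
    for j, i in enumerate(perm):          # recovering interleaver 22
        e2[i] = e2_perm[j]
    return e1, e2                         # e2 is E2' for the next iteration
```

The only point the sketch makes is the ordering of the permutations: E2 is computed in the interleaved domain and must be restored by recovering interleaver 22 before the next iteration can use it.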
At decoding calculator 33, decoded result X″ at this point can be obtained.
Various techniques for decoding turbo codes are conceivable, but there are two dominant methods: SOVA (Soft Output Viterbi Algorithm), which is characterized by producing soft output based on the Viterbi algorithm, and Log-MAP (Maximum A Posteriori probability), which is an improvement in the computational efficiency of the maximum a posteriori decision method. Here, one computational example based on the Log-MAP process will be described.
To begin with, the following notations are used in the description:
m: internal state
m′: internal state (before transition)
St: internal state at time t
Xt: output information at time t
X: estimated output
Yt: received data information at time t
(a): pt(m|m′): the probability of a transition from internal state m′ to internal state m:
pt(m|m′) = Pr(St = m | St−1 = m′)
(b): qt(X|m, m′): the probability that the output is X in the case of a transition from internal state m′ to m:
qt(X|m, m′) = Pr(Xt = X | St = m, St−1 = m′)
(c): R(Yt, X): the probability that the transmitted data is X and the received data is Yt.
(d): γt(m, m′): the probability of a transition from internal state m′ to m when the received data is Yt:
γt(m, m′) = Σ_X pt(m|m′) · qt(X|m, m′) · R(Yt, X)
for all possible X.
(e): αt(m): the probability that the internal state at time t is m:
αt(m) = max_{m′} {αt−1(m′) + γt(m, m′)}
for all possible m′.
(f): βt(m): the probability that the internal state at time t is m:
βt(m) = max_{m′} {βt+1(m′) + γt+1(m, m′)}
for all possible m′.
(g): σt(m, m′): the probability that internal state m′ at time t−1 transfers to internal state m at time t:
σt(m, m′) = αt−1(m′) + γt(m′, m) + βt(m)
Next, computational procedures will be described.
The above precomputable values, (a): pt(m|m′) and (b): qt(X|m, m′), should be determined in advance. Specifically, (a): pt(m|m′) for the turbo encoder shown in FIG. 10 is given in Table 1 below.
TABLE 1
For t = 1:
pt(0|0) = 0.5, pt(1|0) = 0, pt(2|0) = 0.5, pt(3|0) = 0
pt(0|1) = 0, pt(1|1) = 0, pt(2|1) = 0, pt(3|1) = 0
pt(0|2) = 0, pt(1|2) = 0, pt(2|2) = 0, pt(3|2) = 0
pt(0|3) = 0, pt(1|3) = 0, pt(2|3) = 0, pt(3|3) = 0
For t = 2:
pt(0|0) = 0.5, pt(1|0) = 0, pt(2|0) = 0.5, pt(3|0) = 0
pt(0|1) = 0, pt(1|1) = 0, pt(2|1) = 0, pt(3|1) = 0
pt(0|2) = 0, pt(1|2) = 0.5, pt(2|2) = 0, pt(3|2) = 0.5
pt(0|3) = 0, pt(1|3) = 0, pt(2|3) = 0, pt(3|3) = 0
For t > 2:
pt(0|0) = 0.5, pt(1|0) = 0, pt(2|0) = 0.5, pt(3|0) = 0
pt(0|1) = 0.5, pt(1|1) = 0, pt(2|1) = 0.5, pt(3|1) = 0
pt(0|2) = 0, pt(1|2) = 0.5, pt(2|2) = 0, pt(3|2) = 0.5
pt(0|3) = 0, pt(1|3) = 0.5, pt(2|3) = 0, pt(3|3) = 0.5
(b): qt(X|m, m′) is given in Table 2 below.
TABLE 2
qt(0|0, 0) = 1, qt(1|2, 0) = 1
qt(1|0, 1) = 1, qt(0|2, 1) = 1
qt(1|1, 2) = 1, qt(0|3, 2) = 1
qt(0|1, 3) = 1, qt(1|3, 3) = 1
qt(X|m, m′) = 0 for other than the above.
Here, X ranges only over the transmitted data information.
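Tables 1 and 2 can be derived mechanically from the state transitions of FIG. 12. The sketch below does so, assuming states are numbered b1·2 + b2 (a labeling adopted here for illustration):

```python
# Derive pt(m|m') and qt(X|m, m') of Tables 1 and 2 from the transitions of
# FIG. 12.  States are numbered b1*2 + b2 (an assumed labeling); NEXT[m'][u]
# is the state reached from m' on input bit u.
NEXT = {0: {0: 0, 1: 2}, 1: {0: 2, 1: 0}, 2: {0: 3, 1: 1}, 3: {0: 1, 1: 3}}

def reachable_before(t):
    """States the encoder can occupy at time t-1, starting from state 0."""
    states = {0}
    for _ in range(t - 1):
        states = {NEXT[m][u] for m in states for u in (0, 1)}
    return states

def p(t, m, m_prev):
    """pt(m|m'): 0.5 for each of the two transitions out of a reachable m'."""
    if m_prev not in reachable_before(t):
        return 0.0
    return 0.5 if m in (NEXT[m_prev][0], NEXT[m_prev][1]) else 0.0

def q(x, m, m_prev):
    """qt(X|m, m'): 1 exactly when input bit X drives the transition m' -> m."""
    return 1.0 if NEXT[m_prev][x] == m else 0.0
```

For example, p(2, 1, 2) = 0.5 and p(2, 3, 2) = 0.5 reproduce the t = 2 row for m′ = 2 of Table 1, and q(1, 1, 2) = 1 reproduces qt(1|1, 2) = 1 of Table 2.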
Next, for each piece of the received data, (c): R(Yt, X) is determined, and the calculations of (d): γt(m, m′) and (e): αt(m) are repeated for all pieces of the received data. As to (c), for example, a Gaussian distribution having a standard deviation σ may be assumed. The probability density function P(X) in this case is represented by the following equation, and its graph is shown in FIG. 17. Here, the mean m = −1 or +1.
P(X) = (1/√(2πσ²)) · e^(−(X − m)²/(2σ²))
The first curve 500 in FIG. 17 represents p(Yt|Xt = −1), the distribution when the input data is 1 (−1), and the second curve 510 represents p(Yt|Xt = +1), the distribution when the input data is 0 (+1). The horizontal axis represents the Yt value and the vertical axis represents the values of p(Yt|Xt = −1) and p(Yt|Xt = +1). Taking these into consideration, R(Yt, X) can be given by the following equation.
R(Yt, X) = log [ p(Yt|Xt = +1) / p(Yt|Xt = −1) ]
= log [ (1/√(2πσ²)) e^(−(Yt − 1)²/(2σ²)) / (1/√(2πσ²)) e^(−(Yt + 1)²/(2σ²)) ]
= −(1/2)((Yt − 1)/σ)² + (1/2)((Yt + 1)/σ)²
= (2/σ²) · Yt
= 2Yt, when assuming σ = 1.
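The reduction of R(Yt, X) to (2/σ²)·Yt can be checked numerically; the sketch below simply evaluates the Gaussian log-likelihood ratio directly:

```python
import math

def gauss(y, mean, sigma):
    """Gaussian density of FIG. 17 with mean -1 or +1."""
    return (math.exp(-(y - mean) ** 2 / (2 * sigma ** 2))
            / math.sqrt(2 * math.pi * sigma ** 2))

def R(y, sigma=1.0):
    """R(Yt, X) = log[p(Yt|Xt=+1) / p(Yt|Xt=-1)] = (2 / sigma**2) * Yt."""
    return math.log(gauss(y, +1.0, sigma) / gauss(y, -1.0, sigma))

print(round(R(0.3), 6))   # 0.6, i.e. 2 * 0.3 with sigma = 1
```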
For the operation of (e): αt(m), the initial values are α0(0) = 0, α0(1) = −∞, α0(2) = −∞ and α0(3) = −∞.
Next, (f): βt(m) is determined based on the αt thus determined, and then (g): σt(m, m′) is calculated. Based on (g), the input data is estimated to compute a MAP estimate candidate. This process is repeated for all pieces of data.
The initial values for operation of (f) are βn(0)=0, βn(1)=−∞, βn(2)=−∞ and βn(3)=−∞.
For estimation of a MAP candidate, the following expression is calculated and used as the likelihood information for the decoder.
log [ Σ_{Xt=+1} exp(σt(m, m′)) / Σ_{Xt=−1} exp(σt(m, m′)) ] ≅ max_{Xt=+1} {σt(m, m′)} − max_{Xt=−1} {σt(m, m′)}
where Σ should be taken over all conceivable m and m′.
This can be derived using the following approximations in Log-MAP:
· log(Σi exp(ai)) ≅ aM (aM: the maximum value among the ai)
· log(e^P1 · e^P2) = P1 + P2
· log(e^P1 / e^P2) = P1 − P2
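Putting (d) through (g) together, the recursions can be sketched as below. This is a minimal max-log sketch under stated assumptions: states numbered b1·2 + b2 (a labeling adopted here), only the systematic samples used in the branch metric (as the text restricts X to the transmitted data information), noiseless received values Yt = ±1, and σ = 1 so that R(Yt) = 2Yt:

```python
# Max-log recursions (e)-(g) on the trellis of FIG. 12 (a sketch under the
# assumptions stated above).
NEG = float('-inf')
NEXT = {0: {0: 0, 1: 2}, 1: {0: 2, 1: 0}, 2: {0: 3, 1: 1}, 3: {0: 1, 1: 3}}

def gamma(y_t, m_prev, m):
    """Branch metric and systematic symbol x for the transition m' -> m:
    gamma = x * R(Yt)/2 = x * Yt for sigma = 1; -inf if no such transition."""
    for u, x in ((0, +1), (1, -1)):
        if NEXT[m_prev][u] == m:
            return x * y_t, x
    return NEG, None

def max_log_map(y):
    n = len(y)
    # (e): forward recursion, alpha_0 = (0, -inf, -inf, -inf)
    alpha = [[NEG] * 4 for _ in range(n + 1)]
    alpha[0][0] = 0.0
    for t in range(n):
        for m in range(4):
            alpha[t + 1][m] = max(alpha[t][mp] + gamma(y[t], mp, m)[0]
                                  for mp in range(4))
    # (f): backward recursion, beta_n = (0, -inf, -inf, -inf)
    beta = [[NEG] * 4 for _ in range(n + 1)]
    beta[n][0] = 0.0
    for t in range(n - 1, -1, -1):
        for m in range(4):
            beta[t][m] = max(beta[t + 1][mn] + gamma(y[t], m, mn)[0]
                             for mn in range(4))
    # (g) and the max-log likelihood difference above
    llr = []
    for t in range(n):
        best = {+1: NEG, -1: NEG}
        for mp in range(4):
            for m in range(4):
                g, x = gamma(y[t], mp, m)
                if x is not None:
                    best[x] = max(best[x], alpha[t][mp] + g + beta[t + 1][m])
        llr.append(best[+1] - best[-1])
    return llr

X = [0, 1, 1, 0, 0, 0, 1]                  # example data (ends in state 00)
y = [+1.0 if b == 0 else -1.0 for b in X]  # noiseless BPSK: 0 -> +1, 1 -> -1
decoded = [0 if L > 0 else 1 for L in max_log_map(y)]
print(decoded)  # [0, 1, 1, 0, 0, 0, 1]
```

On this noiseless input, the sign of each max-log likelihood recovers the example data sequence X.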
In order to obtain the final error-corrected result, the decoded result is generated based on the likelihood information E1 from the preceding soft-decision decoder 10 and the likelihood information E2′.
The following formula (1) is used for calculation of the decoded result:
Lc·X + E1 + E2′; Lc ≅ 4Ec/No = 2/σ²   (1)
The transmitted data can be estimated by checking the sign of the result calculated by formula (1) above. When the sign of the result is (+), the data can be estimated as (+1); when the sign is (−), it can be estimated as (−1).
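As a minimal illustration of the sign decision of formula (1) (with Lc = 2, i.e. σ = 1, assumed here):

```python
def hard_decision(x, e1, e2p, lc=2.0):
    """Formula (1): the sign of Lc*X + E1 + E2' gives the estimate,
    +1 for a positive sum and -1 for a negative sum (lc = 2/sigma**2)."""
    return +1 if lc * x + e1 + e2p > 0 else -1

# A received sample slightly on the negative side is overruled by strongly
# positive likelihood information from the two decoders:
print(hard_decision(-0.2, 1.5, 0.8))  # 1
```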
FIG. 18 shows the error correction characteristics of this turbo coding.
In FIG. 18, BER (Bit Error Rate) characteristics are shown: no-coding 700 designates the case of no coding; IT=1 (710), the case where one iteration is performed; IT=2 (720), two iterations; IT=3 (730), three iterations; IT=4 (740), four iterations; and IT=5 (750), five iterations.
Here, Viterbi: 760 shows the BER characteristics when 5 bit soft-decision Viterbi decoding (with a constraint length of 9, Rate=⅓) is implemented.
As seen from the BER characteristics in FIG. 18, the error correction performance of turbo coding improves as the number of iterations increases.
However, although turbo coding can improve its error correction characteristics by iterating the decoding, the amount of processing increases with each iteration, so that the number of decoding iterations is limited. In short, the error correction performance is limited by the limit on the amount of processing.
With the prior-art error correction characteristics shown in FIG. 18, if a bit error rate of 10⁻⁶ needs to be achieved, an Eb/No of 3.0 is required with two iterations, whereas with three iterations the required Eb/No falls to 2.2 or lower. That is, the number of iterations is selected depending on the required error correction performance.
Accordingly, if iterations cannot be repeated three times from the viewpoint of the amount of processing, the number of iterations has to be set at 2, hence Eb/No becomes 3.0, resulting in relative degradation of the error correcting performance.
The present invention has been devised in order to solve the above problem; it is therefore an object of the present invention to provide a turbo decoder which can implement fine control of iterations.