This application claims priority to an application entitled "Component Decoder and Method Thereof in Mobile Communication System" filed in the Korean Industrial Property Office on Oct. 5, 1999 and assigned Serial No. 99-42924, and an application entitled "Data Decoding Apparatus and Method Thereof in Communication System" filed in the Korean Industrial Property Office on Oct. 6, 1999 and assigned Serial No. 99-43118, the contents of each of which are herein incorporated by reference.
1. Field of the Invention
The present invention relates generally to a decoder and a decoding method in a mobile communication system, and in particular, to a component decoder and a method thereof for decoding data modulated with a turbo code that uses recursive systematic convolutional codes (RSCs).
2. Description of the Related Art
Channel codes are widely used for reliable data communication in mobile communication systems such as satellite systems, W-CDMA (Wideband-CDMA), and CDMA 2000. The channel codes include convolutional codes and turbo codes.
In general, a convolutionally coded signal is decoded using a Viterbi algorithm based on maximum-likelihood (ML) decoding. The Viterbi algorithm accepts a soft value at its input and produces a hard decision value. In many cases, however, soft-output decoders are required in order to improve performance through concatenated decoding. In this context, many schemes have been suggested for producing a soft output, that is, the reliability of decoded symbols. There are two well-known soft-input/soft-output (SISO) decoding methods, namely, the MAP (Maximum A-posteriori Probability) decoding algorithm and the SOVA (Soft-Output Viterbi Algorithm). The MAP algorithm is considered the best in terms of bit error rate (BER) since it produces a hard decision value in conjunction with an a-posteriori probability, but at the cost of implementation complexity. In 1989, J. Hagenauer proposed the SOVA scheme, in which the Viterbi algorithm is generalized. The SOVA outputs a hard decision value as well as reliability information, that is, a soft output associated with the hard decision value. Hagenauer, however, did not provide a concrete configuration and operation of the SOVA scheme.
As compared to conventional Viterbi algorithms, SOVA generates a hard decision value and reliability information about the hard decision. That is, the soft output provides the reliability of a decoded symbol as well as the polarity of the decoded symbol, −1 or +1, for subsequent decoding. To obtain such reliability information, SOVA calculates path metrics (PMs) for a survivor path (SP) and a competition path (CP) and produces the absolute value of the difference between the PM of the SP and the PM of the CP as the reliability information. The reliability information δ is given by
δ = a·|PMs − PMc|, a > 0    (1)
PMs are calculated in the same manner as in a general Viterbi algorithm.
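Purely as an illustration of Eq. (1), the reliability value can be sketched in Python (the function name and the default scaling constant are assumptions for this sketch, not part of the disclosure):

```python
def reliability(pm_survivor: float, pm_competitor: float, a: float = 1.0) -> float:
    """Reliability per Eq. (1): delta = a * |PMs - PMc|, with a > 0."""
    if a <= 0:
        raise ValueError("the scaling constant a must be positive")
    return a * abs(pm_survivor - pm_competitor)
```

A large gap between the two path metrics indicates a confident hard decision, while equal metrics yield zero reliability.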
To describe SOVA in detail, a trellis is assumed in which there are S = 2^(k−1) states (k is a constraint length) and two branches enter each state.
Given a sufficient delay W, all survivor paths merge into one path in the general Viterbi algorithm. W is also used as the size of a state cell window. In other words, with the state cell window size W set sufficiently large, all survivor paths merge into one path. This is called a maximum likelihood (ML) path. The Viterbi algorithm selects the minimum of the PMs Pm calculated by Eq. (2) to choose a state Sk on the path at a given time k.

Pm = MIN{ (ES/N0) Σ(j=k−W to k) Σ(n=1 to N) (y_jn^(m) − x_jn^(m))^2 } for m = 1, 2    (2)
where x_jn^(m) is the nth bit of an N-bit code symbol on a branch of the mth path at time j, y_jn^(m) is the received code symbol at the position of the code symbol x_jn^(m), and ES/N0 is the signal-to-noise ratio. The probability of selecting the mth path using Pm, that is, the probability of selecting path 1 or path 2 in Eq. (2), is given by
Pr{path = m} ≈ e^(−Pm) for m = 1, 2    (3)
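A minimal sketch of Eqs. (2) and (3), under the assumption that the received and candidate code symbols are given as lists of N-value symbols per trellis step within the window (function names are illustrative):

```python
import math

def path_metric(received, candidate, es_over_n0):
    # Eq. (2): Pm is the scaled squared Euclidean distance between the
    # received symbols y_jn and the code symbols x_jn along one candidate
    # path, summed over the window and over the N bits of each code symbol.
    return es_over_n0 * sum(
        (y - x) ** 2
        for y_sym, x_sym in zip(received, candidate)
        for y, x in zip(y_sym, x_sym)
    )

def selection_probability(pm):
    # Eq. (3): Pr{path = m} is approximately exp(-Pm); the path with the
    # smaller metric therefore has the larger selection probability.
    return math.exp(-pm)
```

The Viterbi algorithm keeps the path whose metric is the minimum, which by Eq. (3) is also the most probable one.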
If the path with the smaller PM is path 1 in Eq. (3), the Viterbi algorithm selects path 1. Here, the probability of selecting the wrong path is calculated by

Psk = e^(−P1) / (e^(−P1) + e^(−P2)) = 1 / (1 + e^Δ)    (4)
where Δ = P2 − P1 > 0. Let the information bits on path 1 and path 2 at time j be Uj(1) and Uj(2), respectively. Then the Viterbi algorithm generates h errors at the positions (e0, e1, e2, . . . , e(h−1)) where Uj(1) ≠ Uj(2). If the two paths merge after length δm (δm ≦ Wm), there exist h different information bits and (δm − h) identical information bits over the length δm. If a previous wrong-decision probability Pj related to path 1 is stored, it can be updated by
Pj←Pj(1xe2x88x92Psh)+(1xe2x88x92Pj)Pskxe2x80x83xe2x80x83(5)
on the assumption that path 1 has been selected.
In Eq. (5), Pj(1 − Psk) is the probability of selecting a right path and (1 − Pj)Psk is the probability of selecting a wrong path. Eq. (5) thus updates the probability by adding the right-path selection probability to the wrong-path selection probability.
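The wrong-path probability of Eq. (4) and the update of Eq. (5) can be sketched together as hypothetical helper functions (not the patented circuit itself):

```python
import math

def wrong_path_probability(p1, p2):
    # Eq. (4): Psk = e^(-P1) / (e^(-P1) + e^(-P2)) = 1 / (1 + e^delta),
    # where delta = P2 - P1 > 0 when path 1 has the smaller metric.
    return 1.0 / (1.0 + math.exp(p2 - p1))

def update_error_probability(p_j, p_sk):
    # Eq. (5): Pj <- Pj * (1 - Psk) + (1 - Pj) * Psk, on the assumption
    # that path 1 has been selected.
    return p_j * (1.0 - p_sk) + (1.0 - p_j) * p_sk
```

When the two metrics are equal, Psk is 1/2; a larger metric gap drives Psk toward zero, leaving the stored probability essentially unchanged.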
Such an iterative update operation is implemented with a log likelihood ratio (LLR) expressed as
Lj = log((1 − Pj) / Pj)

Lj ← min(Lj, Δ/a)    (6)
where Δ is P2 − P1 and a is a constant.
In conclusion, in the case that the estimated information bits are different on the survivor path (path 1) and the competition path (path 2), namely Uj(1) ≠ Uj(2), the SOVA update operation applies only when the new LLR at time j is less than the previous LLR.
FIG. 1 illustrates an example of LLR updating on a trellis with four states. To be more specific, going from time t1 to time t2, the information bits are identical on the survivor path (path 1) and the competition path (path 2), so the LLR update does not apply to this state transition. On the other hand, the information bits on the two paths are different going from t2 to t3 and from t3 to t4, for which the LLR is updated. At t3 and t4, the new value is compared with the previous LLR, and the LLR is replaced only if the new value is less than the previous LLR.
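The selective update rule described for FIG. 1 can be sketched as a loop over state transitions (the transition list and its values below are hypothetical, not taken from the figure):

```python
def apply_sova_updates(llr, transitions):
    # Each transition is (bits_differ, delta): the LLR is updated only on
    # transitions where the survivor and competition paths carry different
    # information bits, and only if the new value is smaller.
    for bits_differ, delta in transitions:
        if bits_differ and delta < llr:
            llr = delta
    return llr
```

For example, a transition with identical bits (as from t1 to t2) is skipped, while transitions with differing bits (as from t2 to t3 and t3 to t4) may lower the stored LLR.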
The above SOVA scheme can be implemented by a trace-back or chain-back SOVA (hereinafter referred to as TBSOVA). In TBSOVA, an ML path is traced back over the window size W at each decoding step. The resulting decoding delay brings about implementation problems in high-speed applications, for example, a mobile terminal.
It is, therefore, an object of the present invention to provide an apparatus and method for decoding turbo-coded data by RESOVA (Register Exchange SOVA) in a mobile communication system.
It is another object of the present invention to provide a RESOVA decoding apparatus and method for decoding turbo-coded data and convolutional coded data in a mobile communication system.
It is a further object of the present invention to provide a RESOVA decoding apparatus and method which reduce decoding delay and memory size requirements at a receiver for receiving turbo-coded or convolutional coded data in a mobile communication system.
It is still another object of the present invention to provide a RESOVA decoding apparatus and method in a mobile communication system, in which an ML state search window (ML state cell window) outputs an ML state value at time (k − Ds) with respect to an arbitrary time k, and an LLR update window outputs an LLR selected based on the ML state value at approximately time (k − Ds − DL) at a component decoder.
It is yet another object of the present invention to provide a decoding apparatus and method in a mobile communication system, in which a component decoder having an ML state search window and an LLR update window receives a virtual code to increase the accuracy of the ML state search at the boundary of a frame and further performs the ML state search on the frame boundary by the size of the ML state search window.
The above objects can be achieved by providing a decoder and a decoding method for decoding data received from a transmitter. The data is encoded with an RSC in a mobile communication system. In the decoder, a branch metric calculating circuit (BMC) calculates branch metric values (BMs) associated with a plurality of input symbols. An add-compare-select circuit (ACS) receives the BMs and previous path metric values (PMs) and generates a plurality of path selection bits and LLR (Log Likelihood Ratio) data including the plurality of path selection bits and reliability information at a first time instant. A maximum likelihood (ML) state searcher has a plurality of cells in an array with rows and columns, connected to one another according to an encoder trellis, the cells in each row having a process time, Ds, for outputting the common value of the cells in the last column as an ML state value representing an ML path in response to the path selection bits. A delay delays the LLR data received from the ACS by the time Ds. An LLR update circuit has a plurality of processing elements (PEs) in an array with rows and columns, connected according to the encoder trellis, the PEs in each row having a process time, DL, for generating updated LLR values from the PEs at a time instant (first time instant − approximately (Ds + DL)) in response to the delayed LLR data received from the delay. A selector selects one of the updated LLR values based on the ML state value.
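Purely as a software analogy of the dataflow just described (all interfaces here are hypothetical; the disclosure describes hardware circuits operating on trellis arrays), the chain BMC → ACS → ML state searcher, with the LLR data delayed by Ds before the LLR update circuit, might be sketched as:

```python
class ComponentDecoder:
    """Sketch of the dataflow among the described blocks (not the circuit)."""

    def __init__(self, bmc, acs, ml_searcher, llr_updater, ds):
        self.bmc = bmc                  # branch metric calculating circuit
        self.acs = acs                  # add-compare-select circuit
        self.ml_searcher = ml_searcher  # ML state search window (latency Ds)
        self.llr_updater = llr_updater  # LLR update window (latency DL)
        self.delay_line = [None] * ds   # delays LLR data by Ds to align streams

    def step(self, symbols, prev_pms):
        bms = self.bmc(symbols)
        path_bits, llr_data, pms = self.acs(bms, prev_pms)
        ml_state = self.ml_searcher(path_bits)   # ML state for time k - Ds
        self.delay_line.append(llr_data)
        delayed = self.delay_line.pop(0)         # LLR data delayed by Ds
        updated = self.llr_updater(delayed)      # output near k - Ds - DL
        return ml_state, updated, pms
```

The final selector (not shown) would pick one of the updated LLR values according to the ML state value.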