1. Field of the Invention
The present invention relates to decoding of encoded and transmitted data in a communication system, and, more particularly, to scaling samples input to maximum a priori (MAP) decoding algorithms.
2. Description of the Related Art
MAP algorithms are employed for processing a channel output signal applied to a receiver. MAP algorithms may be used for both detection (to reconstruct estimates for transmitted symbols) and decoding (to reconstruct user data). A MAP algorithm provides a maximum a posteriori estimate of a state sequence of a finite-state, discrete-time Markov process observed in noise. A MAP algorithm forms a trellis corresponding to possible states (portion of received symbols or data in the sequence) for each received output channel sample per unit increment in time (e.g., clock cycle).
A trellis diagram may represent states, and transitions between states, of the Markov process spanning an interval of time. The number of bits that a state represents is equivalent to the memory of the Markov process. Thus, probabilities (sometimes of the form of log-likelihood ratio (LLR) values) are associated with each transition within the trellis, and probabilities are also associated with each decision for a sample in the sequence. These LLR values are also referred to as reliability information.
A processor implementing a MAP algorithm computes LLR values using α values (forward state probabilities for states in the trellis and also known as a forward recursion) and β values (reverse state probabilities in the trellis and also known as a backward recursion), as described subsequently. The α values are associated with states within the trellis, and these α values are stored in memory. The processor using a MAP algorithm computes values of β, and the α values are then retrieved from memory to compute the final output LLR values.
The variable S is defined as the possible state (from a set of M possible states {s_p}, p = 0, 1, . . . , M−1) of the Markov process at time i, y_i is defined as the noisy channel output sample at time i, and the sample sequence y^K is defined as the sequence {y_i}, i = 0, 1, . . . , K−1, of length K of noisy channel output samples. Therefore, y_i^K is the noisy channel output sample y_i at time i in a given sequence y^K of length K. For a data block of length K, probability functions at time i may be defined for the Markov process as given in equations (1) through (3):

α_s^i = p(S = s; y_i^K)  (1)

β_s^i = p(y_{i+1}^K | S = s)  (2)

γ_{s′,s}^i = p(S = s; y_i^K | S′ = s′),  (3)

where S is the Markov process variable at time i, S′ is the Markov process variable at time i−1, s is the observed state of S of the Markov process at time i, and s′ is the observed state of S′ of the Markov process at time i−1.
The log-likelihood ratio (LLR) value L(u_i) for a user's symbol u_i at time i may then be calculated as given in equation (4):

L(u_i) = log [ p(u_i = +1 | y_i^K) / p(u_i = −1 | y_i^K) ].  (4)
Defining α_l^i and β_l^i from equations (1) and (2) as the forward and backward recursions (probabilities, or state metrics) at time i in state s = l, respectively, and defining γ_{m,l}^i as the branch metric associated with the transition from state m at time i−1 to state l at time i, the forward recursion for states is given in equation (5):

α_l^i = Σ_{m ∈ S} α_m^{i−1} γ_{m,l}^i,  (5)

where m ∈ S is the set of states at time i−1 which have a valid transition to the state l at time i.
Similarly, the backward recursion for states is given in equation (6):
β_l^{i−1} = Σ_{m ∈ S} β_m^i γ_{l,m}^i,  (6)

where m ∈ S is the set of states at time i which have a valid transition from the state l at time i−1 to the state m at time i.
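The probability-domain recursions of equations (5) and (6) can be sketched as follows; the two-state trellis, its branch metrics, and the boundary conditions are illustrative values, not taken from the specification:

```python
# Sketch of the forward recursion (5) and backward recursion (6) on a
# hypothetical 2-state trellis. All branch metrics are illustrative.

K = 4          # block length (illustrative)
M = 2          # number of trellis states (illustrative)

# gamma[i][m][l]: branch metric for the transition from state m at
# time i-1 to state l at time i (every transition is valid here).
gamma = [[[0.6, 0.4], [0.3, 0.7]] for _ in range(K)]

# Forward recursion, eq. (5): alpha_l^i = sum_m alpha_m^(i-1) * gamma_{m,l}^i
alpha = [[0.0] * M for _ in range(K + 1)]
alpha[0] = [1.0, 0.0]            # assume the trellis starts in state 0
for i in range(1, K + 1):
    for l in range(M):
        alpha[i][l] = sum(alpha[i - 1][m] * gamma[i - 1][m][l] for m in range(M))

# Backward recursion, eq. (6): beta_l^(i-1) = sum_m beta_m^i * gamma_{l,m}^i
beta = [[0.0] * M for _ in range(K + 1)]
beta[K] = [1.0, 1.0]             # uniform termination (illustrative)
for i in range(K, 0, -1):
    for l in range(M):
        beta[i - 1][l] = sum(beta[i][m] * gamma[i - 1][l][m] for m in range(M))

print(alpha[K], beta[0])
```

Because each row of the illustrative branch metrics sums to one, the forward state probabilities remain normalized at every time step.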
Once the forward and backward recursions for states are calculated, equation (4) is employed to generate the log-likelihood value (also known as the reliability value) L(u_i) for each user symbol u_i. Thus, equation (4) may be re-written as given in equation (7):
L(u_i) = log [ (Σ_{(l,m) ∈ S+} α_l^{i−1} γ_{l,m}^i β_m^i) / (Σ_{(l,m) ∈ S−} α_l^{i−1} γ_{l,m}^i β_m^i) ],  (7)

where a state pair (l, m) ∈ S+ is defined as a pair having a transition from state l at time i−1 to state m at time i corresponding to the user symbol u_i = “1”, and a state pair (l, m) ∈ S− is similarly defined as a pair having a transition from state l at time i−1 to state m at time i corresponding to the user symbol u_i = “−1”.
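Equation (7) forms the LLR as the log of a ratio of α·γ·β products summed over the “+1” and “−1” transition sets. A minimal sketch, with an illustrative two-state trellis and illustrative transition sets:

```python
import math

# Sketch of equation (7): LLR from summed alpha * gamma * beta products
# over the S+ and S- transition sets. All values are illustrative.

def llr(alpha_prev, gamma, beta, s_plus, s_minus):
    """alpha_prev[l] and beta[m] are state metrics, gamma[l][m] the branch
    metric; s_plus / s_minus list the (l, m) pairs of each transition set."""
    num = sum(alpha_prev[l] * gamma[l][m] * beta[m] for (l, m) in s_plus)
    den = sum(alpha_prev[l] * gamma[l][m] * beta[m] for (l, m) in s_minus)
    return math.log(num / den)

# Two-state example: transitions (0,0) and (1,1) carry "+1", the rest "-1".
alpha_prev = [0.6, 0.4]
beta = [0.5, 0.5]
gamma = [[0.7, 0.3], [0.2, 0.8]]
print(llr(alpha_prev, gamma, beta, s_plus=[(0, 0), (1, 1)], s_minus=[(0, 1), (1, 0)]))
```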
A MAP algorithm may be defined by substituting A_m^i = log(α_m^i), B_m^i = log(β_m^i), and Γ_{l,m}^i = log(γ_{l,m}^i) into equations (5), (6), and (7). Such substitution is sometimes referred to as the log-MAP algorithm. Also, using the relation that log(e^x + e^y) is equivalent to max(x, y) + log(e^{−|x−y|} + 1), the forward and backward recursions of the log-MAP algorithm may be described as in equations (8) and (9):
A_m^i = max*_{l ∈ S} (A_l^{i−1} + Γ_{l,m}^i)  (8)

B_l^{i−1} = max*_{m ∈ S} (B_m^i + Γ_{l,m}^i)  (9)

where max*(x, y) is defined as max(x, y) + log(e^{−|x−y|} + 1). Note that equations (8) and (9) may include more than two terms in the max*(·, . . . , ·) operator, so a max*(x, y, . . . , z) operation may be performed as a series of pairwise max*(·, ·) calculations. The term max(x, y) is defined as the “max term” and log(e^{−|x−y|} + 1) is defined as the “logarithmic correction term.”
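The max* operation above is exactly log(e^x + e^y), split into the max term and the logarithmic correction term. A sketch (the helper names are illustrative):

```python
import math

def max_star(x, y):
    """max*(x, y) = max(x, y) + log(1 + e^(-|x - y|)).
    The first summand is the "max term", the second the
    "logarithmic correction term" of the log-MAP algorithm."""
    return max(x, y) + math.log1p(math.exp(-abs(x - y)))

def max_star_n(values):
    """max* over more than two terms, computed as a series of
    pairwise max* operations, as noted for equations (8) and (9)."""
    acc = values[0]
    for v in values[1:]:
        acc = max_star(acc, v)
    return acc

# max*(x, y) equals log(e^x + e^y) exactly:
print(max_star(1.0, 2.0), math.log(math.exp(1.0) + math.exp(2.0)))
```

In fixed-point hardware the correction term log(1 + e^{−|x−y|}) is typically the part implemented via a look-up table, which is why the scaling of its inputs matters later in this section.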
One application of the MAP algorithm is a form of iterative decoding termed “turbo decoding,” and a decoder employing turbo decoding of data encoded by a turbo encoder is generally referred to as a turbo decoder. Turbo encoding is an encoding method in which two identical constituent encoders separated by an interleaver are used to encode user data. A commonly employed code rate for the turbo encoder is ⅓ and for each constituent encoder is ½.
The turbo encoder of a transmitter generates three sequences. The sequence {x_i}, i = 0, 1, . . . , L−1, represents the transmitted information bit sequence, and the sequences {p_i} and {q_i}, i = 0, 1, . . . , L−1, represent the parity bit sequences of the first and the second constituent encoders, respectively. These three sequences (bit streams) are combined and transmitted over a channel characterized by Rayleigh fading factors α_i and added Gaussian noise of variance σ² = N₀/2.
Turbo decoding at a receiver employs “soft-input, soft-output” (SISO) constituent decoders separated by a de-interleaver and an interleaver to iteratively decode the turbo encoded user data. For example, the first constituent decoding employs the original input sample sequence, while an interleaver is employed to interleave the original input sample sequence for the second constituent decoding. The de-interleaver is employed to de-interleave the soft decisions of the first constituent decoding that are then used by the second constituent decoding, and the interleaver is employed to re-interleave the soft decisions after the second constituent decoding for the first constituent decoding.
At the receiver, the signal from the channel is sampled to yield the sequence (receive samples) as given in equation (10):
{y_i}_{i=0}^{K−1} = {α_i x_i √E_s + n_i}_{i=0}^{K−1}, {t_i}_{i=0}^{K−1} = {α_i p_i √E_s + n_i′}_{i=0}^{K−1}, and {t_i′}_{i=0}^{K−1} = {α_i q_i √E_s + n_i″}_{i=0}^{K−1},  (10)

where K is the frame size, E_s is the energy of a sample, and {n_i, n_i′, n_i″} are the added noise components. Each constituent SISO decoder generates a series of log-likelihood ratio (LLR) values {L_i}_{i=0}^{K−1} using the sequence of input samples {y_i}_{i=0}^{K−1}, input extrinsic (a priori) information {z_i}_{i=0}^{K−1} (received from an interleaver or de-interleaver, depending on the constituent decoder), and newly generated extrinsic information {l_i}_{i=0}^{K−1} to be applied to either the interleaver or de-interleaver (depending on the constituent decoder) for the next iteration. The ith LLR value L_i is generated according to equation (11):

L_i = (2 α_i √E_s / σ²) y_i + z_i + l_i.  (11)
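The per-sample combination of equation (11) can be sketched directly; the channel parameters and sample values below are illustrative:

```python
import math

# Sketch of equation (11):
#   L_i = (2 * alpha_i * sqrt(Es) / sigma^2) * y_i + z_i + l_i
# Channel parameters are illustrative; sigma^2 = N0 / 2 as in the text.

Es = 1.0                 # energy of a sample (illustrative)
N0 = 0.5                 # noise spectral density (illustrative)
sigma2 = N0 / 2.0

def llr_update(y_i, alpha_i, z_i, l_i):
    """Combine the channel sample y_i (weighted by the channel reliability
    2 * alpha_i * sqrt(Es) / sigma^2), the a-priori extrinsic value z_i,
    and the newly generated extrinsic value l_i."""
    return (2.0 * alpha_i * math.sqrt(Es) / sigma2) * y_i + z_i + l_i

# One sample over a hypothetical non-fading channel (alpha_i = 1):
print(llr_update(y_i=0.8, alpha_i=1.0, z_i=0.1, l_i=-0.2))
```

The leading factor 2·α_i·√E_s/σ² grows as the SNR grows, which is why the following paragraph argues that the input samples must be scaled in proportion to the SNR.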
From equation (11), one skilled in the art would recognize that it is desirable to scale the received samples that are employed to generate extrinsic information. Such scaling is desirably based on the signal-to-noise ratio (SNR) values, since with each iteration the LLR values are each increasing by an amount related to the SNR value. Consequently, the samples of a slot are divided by a scaling factor that is proportional to the SNR of the observed input samples.
For turbo decoding, an estimate of a scaling value is generated and applied to input samples to avoid severe performance degradation of the decoder. Scaling in the turbo decoder using log-MAP decoding is required for the logarithmic correction term (typically implemented via a look-up table). One method employed in the prior art to scale soft samples properly uses a fixed-value look-up table, and another method programs the values within the look-up table according to SNR estimation. Finally, a combination of these two methods scales the soft samples with a control constant to program the look-up table entries. Fixed-point precision issues arise when scaling and adjusting values in the look-up table.
For a given implementation, the dynamic range is also modified for efficient soft sample representation. In finite precision format, each scaling operation represents re-quantization, which has a corresponding truncation error and saturation error. Thus, an estimator desirably provides SNR values both as a scaling factor for the max* (log-MAP) decoding as well as a scaling factor used to adjust the dynamic range of the input soft samples toward a desired dynamic range. These two estimated factors are ideally the same. Power control and automatic gain control (AGC) of the receiver are also affected by scaling, and so the estimator should reflect differences in transmission power. For relatively good SNR estimation generated locally in real time, the estimator needs to be short, but a short estimator generally produces more estimation errors.
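The truncation and saturation errors of finite-precision re-quantization can be illustrated with a small sketch; the bit widths and scale factor are illustrative, not from the specification:

```python
# Sketch of fixed-point re-quantization: scale a soft sample, truncate it
# to an integer grid, and saturate it to a signed target bit width.
# The widths and scale factor below are illustrative.

def requantize(sample, scale, bits):
    """Scale a soft sample and re-quantize it to a signed `bits`-bit integer."""
    q = int(sample * scale)                 # truncation error introduced here
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, q))              # saturation error introduced here

# A wide-precision sample mapped down to 4-bit precision:
print(requantize(100, 0.1, 4))   # scaled to 10, then saturated to +7
print(requantize(-3.7, 1.0, 4))  # truncated toward zero to -3
```

The first call shows saturation error (the scaled value exceeds the 4-bit range) and the second shows truncation error, the two error sources named above.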
In addition to the interleaving of the turbo encoder, channel interleaving of the encoded data at the transmitter may be employed to randomly distribute burst errors of the channel through a de-interleaved sequence representing the encoded data. This process is distinguished herein as “channel interleaving” and “channel de-interleaving.” Since the soft sample sequence after channel de-interleaving does not reflect the time order of channel propagation, estimators and scaling methods tend to exhibit a random nature (i.e., processing after channel de-interleaving is virtually a “blind” estimation approach). Thus, scaling is often applied before channel de-interleaving to reflect the natural propagation order. In CDMA systems, a RAKE receiver demands higher bit precision (a higher number of bits) for data path processing, while a turbo decoder requires lower bit precision. Reduction of bit precision is accomplished using digital re-quantization that is typically done before de-interleaving the input samples. Therefore, scaling is preferably applied before this re-quantization (or the scaling factors estimated prior to re-quantization) for accuracy. Typically, a turbo decoder tolerates more overestimation than underestimation. For some systems, accuracy of SNR estimates to within −2 dB and +6 dB is required to achieve acceptable performance degradation in a static channel.
Down-link power control schemes in UMTS WCDMA systems include a base station that adjusts its transmitting power, with a certain delay, according to the received TPC (transmitting power control) bit. Power control may be defined over groups of bits, and, for example, power control of the transmitted signal may be updated 1500 times per second. A slot is defined as a unit time duration of one fixed transmission power control, which for the example is 1/1500 second. In short, the transmitting power remains constant only for a slot and changes from slot to slot.
The down-link power for the kth slot is adjusted according to equation (12):

P(k) = P(k−1) + P_TPC(k) + P_bal(k),  (12)

where P_TPC(k) is the power adjustment due to inner-loop power control, and P_bal(k) is the correction according to the down-link power control procedure for balancing radio link power to a common reference power. The value for P_TPC(k) is given in equations (13) through (17) as follows:
A) if the value of the Limited Power Raise Used parameter is ‘Not used’, then

P_TPC(k) = +Δ_TPC, if TPC_est(k) = 1,  (13)

P_TPC(k) = −Δ_TPC, if TPC_est(k) = 0,  (14)

B) else if the value of the Limited Power Raise Used parameter is ‘Used’, then

P_TPC(k) = +Δ_TPC, if TPC_est(k) = 1 and Δ_sum(k) + Δ_TPC < Power_Raise_Limit,  (15)

P_TPC(k) = 0, if TPC_est(k) = 1 and Δ_sum(k) + Δ_TPC ≧ Power_Raise_Limit,  (16)

P_TPC(k) = −Δ_TPC, if TPC_est(k) = 0,  (17)

where the value Δ_sum(k) is the temporary sum of the last inner-loop power adjustments given in equation (18):
Δ_sum(k) = Σ_{i = k−DLPA_Window_Size+1}^{k−1} P_TPC(i),  (18)

where DLPA_Window_Size is the length of the sample window used for an update. The power control step size Δ_TPC may comprise, for example, one of four values: 0.5, 1.0, 1.5, or 2.0 dB.
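The limited-power-raise behaviour of equations (13) through (18) can be sketched as follows; the step size, limit, and window length are illustrative values, and the governing UMTS specification is the authority on the actual procedure:

```python
# Sketch of the inner-loop power adjustment of equations (13)-(18).
# Step size, raise limit, and window length are illustrative.

DELTA_TPC = 1.0            # power control step size, dB (illustrative)
POWER_RAISE_LIMIT = 3.0    # Power_Raise_Limit, dB (illustrative)
DLPA_WINDOW_SIZE = 4       # DLPA_Window_Size, slots (illustrative)

def p_tpc(tpc_est, history, limited_power_raise_used):
    """Return P_TPC(k) for one slot.

    `history` holds past P_TPC values; per equation (18), Delta_sum(k)
    sums the last DLPA_Window_Size - 1 of them."""
    if tpc_est == 0:
        return -DELTA_TPC                                   # eqs (14), (17)
    if not limited_power_raise_used:
        return +DELTA_TPC                                   # eq (13)
    delta_sum = sum(history[-(DLPA_WINDOW_SIZE - 1):])      # eq (18)
    if delta_sum + DELTA_TPC < POWER_RAISE_LIMIT:
        return +DELTA_TPC                                   # eq (15)
    return 0.0                                              # eq (16)

# Repeated "up" commands are clipped once the windowed sum reaches the limit:
history = []
for _ in range(5):
    history.append(p_tpc(1, history, limited_power_raise_used=True))
print(history)
```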
Scaling factors derived with inter-slot estimation methods average out a portion of the noise perturbation in the transmitted signal. Inter-slot estimation is also related to the corresponding transmitter power control method. Defining (Ê_s/N₀)(i) as the estimated online SNR for the ith slot, equation (19) yields an average SNR of the ith slot (SNR(i)) for the final scaling factor:

SNR(i) = λ₁ (Ê_s/N₀)(i) + λ₂ [(Ê_s/N₀)(i−1) + P_TPC(i) + P_bal(i)],  (19)

where λ₁ and λ₂ are positive numbers that add to one (that is, λ₁ + λ₂ = 1). When λ₁ = 0 and λ₂ = 1, the online SNR for this slot is based purely on the previous estimation and the power control adjustment. On the other hand, when λ₁ = 1 and λ₂ = 0, no estimation of the previous slot is used. For this case, the scaling factor SNR(i) becomes slot-based and dependent upon the SNR of the slot modified in accordance with the power control.
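The inter-slot blend of equation (19) reduces to a single weighted sum per slot; a minimal sketch, with illustrative weights and dB values:

```python
# Sketch of equation (19): blend this slot's raw online SNR estimate with
# the previous slot's estimate corrected by the known power control moves.
# Weights and the example values (in dB) are illustrative.

LAMBDA1, LAMBDA2 = 0.5, 0.5          # must satisfy lambda1 + lambda2 = 1

def blended_snr(es_n0_hat, prev_es_n0_hat, p_tpc, p_bal):
    """SNR(i) = lambda1 * (Es/N0)^(i) + lambda2 * ((Es/N0)^(i-1) + P_TPC(i) + P_bal(i))."""
    return LAMBDA1 * es_n0_hat + LAMBDA2 * (prev_es_n0_hat + p_tpc + p_bal)

# This slot's estimate is 4.0 dB; last slot's was 3.5 dB with a +1 dB TPC step:
print(blended_snr(es_n0_hat=4.0, prev_es_n0_hat=3.5, p_tpc=1.0, p_bal=0.0))
```

Setting LAMBDA1 = 1 recovers the purely slot-based estimator; LAMBDA1 = 0 recovers the purely predicted one, matching the two limiting cases described above.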