Turbo coding (TC) is used for error control coding in digital communications and signal processing. The following references give examples of various implementations of TC: “Near Shannon Limit Error-Correcting Coding and Decoding: Turbo-Codes”, Berrou, Glavieux and Thitimajshima, Proceedings of the IEEE International Conference on Communications, Geneva, Switzerland, pp. 1064-1070, May 1993; “Implementation and Performance of a Turbo/MAP Decoder”, Pietrobon, International Journal of Satellite Communications; “Turbo Coding”, Heegard and Wicker, Kluwer Academic Publishers, 1999.
The MAP algorithm and the soft output Viterbi algorithm (SOVA) are soft-input soft-output (SISO) decoding algorithms that have gained wide acceptance in the area of communications. Both algorithms are mentioned in U.S. Pat. No. 5,933,462 of Viterbi et al.
The TC has gained wide acceptance in the area of communications, such as in cellular networks, modems, and satellite communications. Some turbo encoders consist of two parallel-concatenated systematic convolutional encoders separated by a random interleaver. A turbo decoder has two SISO decoders. The output of the first SISO decoder is coupled to the input of the second SISO decoder via a first interleaver, while the output of the second SISO decoder is coupled to an input of the first SISO decoder via a feedback loop that includes a deinterleaver.
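The interleaver/deinterleaver pairing in the feedback loop described above can be sketched as follows. This is an illustration only; the permutation pattern and helper names are hypothetical and not taken from any cited reference.

```python
def interleave(x, pi):
    # Output position i receives input position pi[i].
    return [x[p] for p in pi]

def deinterleave(y, pi):
    # Invert the permutation, so soft values fed back to the first
    # SISO decoder line up with its original bit order again.
    out = [None] * len(y)
    for i, p in enumerate(pi):
        out[p] = y[i]
    return out

pi = [2, 0, 3, 1]              # a hypothetical 4-bit interleaver pattern
llrs = [1.5, -0.7, 0.2, -2.1]  # example soft values from a SISO decoder
assert deinterleave(interleave(llrs, pi), pi) == llrs
```

The round-trip assertion reflects the structural requirement of the decoder: the deinterleaver in the feedback loop must exactly undo the first interleaver.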
A common SISO decoder uses either a maximum a posteriori (MAP) decoding algorithm or a log MAP decoding algorithm. The latter algorithm is analogous to the former but is performed in the logarithmic domain. Another common decoding algorithm is the max log MAP algorithm. The max log MAP is analogous to the log MAP, but the implementation of the latter involves the addition of a correction factor. Briefly, the MAP algorithm finds the most likely information bit to have been transmitted in a coded sequence.
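The relationship between the log MAP and the max log MAP can be made concrete with the Jacobian logarithm. The sketch below is illustrative, not an implementation from the cited references: the log MAP uses the exact max* operation, while the max log MAP drops its correction term.

```python
import math

def max_star(a, b):
    # Exact log-domain addition (Jacobian logarithm), as used by log MAP:
    # max*(a, b) = ln(e^a + e^b) = max(a, b) + ln(1 + e^(-|a - b|))
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_approx(a, b):
    # Max log MAP keeps only the max; the correction term is omitted.
    return max(a, b)

# The correction factor is largest when the two metrics are equal:
# max*(x, x) = x + ln 2, while the approximation returns just x.
```

The correction term ln(1 + e^(-|a - b|)) vanishes as the two metrics move apart, which is why the max log MAP is a good approximation when one path dominates.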
The output signals of a convolutional encoder are transmitted via a channel and are received by a receiver that has a turbo decoder. The channel usually adds noise to the transmitted signal.
During the decoding process a trellis of the possible states of the code is defined. The trellis includes a plurality of nodes (states), organized in T stages, each stage having N=2^(K−1) nodes, where T is the number of received samples taken into account for evaluating which bit was transmitted from a transmitter having the convolutional encoder and K is the constraint length of the code used for encoding. Each stage comprises the states that represent a given time. Each state is characterized by a forward state metric, commonly referred to as alpha (α or a), and by a backward state metric, commonly referred to as beta (β or b). Each transition from one state to another is characterized by a branch metric, commonly referred to as gamma (γ).
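As a simple numeric illustration of the trellis dimensions (the parameter values below are hypothetical, chosen only for the arithmetic):

```python
K = 4                       # hypothetical constraint length
T = 1000                    # hypothetical number of received samples

N = 2 ** (K - 1)            # states per stage: N = 2^(K-1), so 8 for K = 4
alpha_count = (T + 1) * N   # one forward metric per state per stage

# Storing every alpha over the full trellis takes (T + 1) * N values,
# and likewise for the betas, before counting the branch metrics --
# which is why full-trellis MAP decoding is memory intensive.
```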
Alphas, betas and gammas are used to evaluate a probability factor that indicates which signal was transmitted. This probability factor is commonly known as lambda (Λ). A transition from a stage to an adjacent stage is represented by a single lambda.
The articles mentioned above describe prior art methods for performing the MAP algorithm; these prior art methods comprise three steps. During the first step the alphas associated with all the trellis states are calculated, starting with the states of the first level of depth and moving forward. During the second step the betas associated with all the trellis states are calculated, starting with the states of the last level of depth and moving backwards. Usually, the lambdas can be calculated while the betas are calculated. Usually, the gammas are calculated during or even before the first step.
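The three steps above can be sketched in log-domain Python over a toy trellis. Everything here is an illustrative assumption rather than a method from the cited articles: the gamma layout, the bit labeling (a transition into an odd-numbered state carries bit 1), the start-in-state-0 initialization of the alphas, and the all-zero initialization of the betas.

```python
import math

NEG = float("-inf")

def max_star(a, b):
    # ln(e^a + e^b); with one operand at -inf this reduces to the other.
    if a == NEG:
        return b
    if b == NEG:
        return a
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def map_decode(gamma, n_states):
    """gamma[t][s][s2] is the log branch metric from state s at stage t
    to state s2 at stage t+1, or -inf for an invalid transition."""
    T = len(gamma)
    # Step 1: forward state metrics (alphas), first stage moving forward.
    alpha = [[NEG] * n_states for _ in range(T + 1)]
    alpha[0][0] = 0.0  # assume the trellis starts in state 0
    for t in range(T):
        for s in range(n_states):
            for s2 in range(n_states):
                if alpha[t][s] != NEG and gamma[t][s][s2] != NEG:
                    alpha[t + 1][s2] = max_star(
                        alpha[t + 1][s2], alpha[t][s] + gamma[t][s][s2])
    # Step 2: backward state metrics (betas), last stage moving backwards.
    beta = [[NEG] * n_states for _ in range(T + 1)]
    beta[T] = [0.0] * n_states  # unterminated-trellis assumption
    for t in range(T - 1, -1, -1):
        for s in range(n_states):
            for s2 in range(n_states):
                if beta[t + 1][s2] != NEG and gamma[t][s][s2] != NEG:
                    beta[t][s] = max_star(
                        beta[t][s], beta[t + 1][s2] + gamma[t][s][s2])
    # Step 3: one lambda per stage transition, combining alpha,
    # gamma and beta over the bit-1 and bit-0 branches.
    lambdas = []
    for t in range(T):
        l0, l1 = NEG, NEG
        for s in range(n_states):
            for s2 in range(n_states):
                if alpha[t][s] == NEG or gamma[t][s][s2] == NEG:
                    continue
                m = alpha[t][s] + gamma[t][s][s2] + beta[t + 1][s2]
                if s2 % 2:  # hypothetical bit labeling by target state
                    l1 = max_star(l1, m)
                else:
                    l0 = max_star(l0, m)
        lambdas.append(l1 - l0)
    return lambdas
```

For a one-stage, two-state trellis where the bit-1 branch metric is 0 and the bit-0 branch metric is -2, the single lambda comes out to 2.0, correctly favoring bit 1.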
The TC can be implemented in hardware or in software. When implemented in hardware, the TC will generally run much faster than the TC implemented in software. However, implementing the TC in hardware is more expensive in terms of semiconductor surface area, complexity, and cost.
Calculating the lambdas of the whole trellis is very memory intensive. A very large number of alphas, betas and gammas must be stored.
Another prior art method is described in U.S. Pat. No. 5,933,462 of Viterbi. This patent describes a soft decision output decoder for decoding convolutionally encoded code words. The decoder is based upon “generalized” Viterbi decoders and a dual maxima processor. The decoder has various drawbacks, including, but not limited to, the following: the decoder has either a single backward decoder or two backward decoders, and in both cases, especially in the case of a single backward decoder, the decoder is relatively time consuming. In both cases, the learning period L equals the window W in which valid results are provided by the backward decoder and the forward decoder. Usually L<W, and the decoder described in U.S. Pat. No. 5,933,462 is therefore not effective. Furthermore, at the end of the learning period an estimation of either a forward metric or a backward metric is provided. Calculations that are based upon these estimations, such as the calculations of forward metrics, backward metrics and lambdas, are less accurate than calculations that are based upon exact values of these variables.
The decoder described in U.S. Pat. No. 5,933,462 is limited to calculating state metrics of nodes over a window having a length of 2L, where L is a number of constraint lengths and 2L is smaller than the block length T of the trellis.
There is a need to provide an improved device and method for performing high-accuracy SISO decoding that is not memory intensive. There is also a need to provide a fast method for performing SISO decoding and an accelerating system for enhancing the performance of embedded systems.