The present invention relates generally to a turbo-code decoding apparatus and method. In particular, the invention relates to an improved turbo-code decoding apparatus and method for decoding a convolution encoded data frame using symbol-by-symbol traceback and an approximation introduced into HR-SOVA, improving bit error rate (BER) performance with fewer iterations.
In 1948 Shannon published a paper entitled "A Mathematical Theory of Communication" in the Bell System Technical Journal, Vol. 27, pp. 379-423 and 623-656, 1948. Shannon showed that the capacity, C, of a channel perturbed by additive white Gaussian noise (AWGN) is a function of the average received signal power, S, the average noise power, N, and the bandwidth, W. The system capacity is given by the following formula:
C = W log2(1 + S/N)
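As a quick numerical illustration (not part of the invention), the capacity formula can be evaluated directly; the 3 kHz bandwidth and 30 dB SNR figures below are hypothetical example values:

```python
import math

def channel_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon capacity C = W * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# Hypothetical example: a 3 kHz channel with a 30 dB SNR (S/N = 1000)
# supports roughly 29.9 kbit/s of error-free transmission.
c = channel_capacity(3000.0, 1000.0)
```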
This equation promises the existence of error-correction codes that enable information transmission over a noisy channel at any rate R ≤ C with an arbitrarily small error probability. Shannon's work showed that the values of S, N, and W set a limit on the transmission rate, not on the error probability.
In a subsequent paper entitled "Communication Theory of Secrecy Systems," Bell System Technical Journal, Vol. 28, pp. 656-715, 1949, Shannon showed that the maximum amount of information that can be transmitted over a noisy channel is limited by the following formula, known as the Shannon limit:
Eb/N0 = (W/C)(2^(C/W) − 1), which approaches −1.59 dB as C/W → 0,
where Eb represents the energy per transmitted information bit; N0 represents the noise power spectral density, such that N0 = N/W; and Eb/N0 is the signal-to-noise ratio per bit.
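The limit can be checked numerically: as the spectral efficiency C/W approaches zero, the expression (W/C)(2^(C/W) − 1) approaches ln 2 ≈ 0.693, which is −1.59 dB. A short sketch:

```python
import math

def ebn0_limit(spectral_efficiency: float) -> float:
    """Minimum Eb/N0 (linear scale) for reliable transmission
    at spectral efficiency r = C/W, i.e. (2**r - 1) / r."""
    r = spectral_efficiency
    return (2.0 ** r - 1.0) / r

def to_db(x: float) -> float:
    """Convert a linear power ratio to decibels."""
    return 10.0 * math.log10(x)

# As C/W -> 0 the limit approaches ln 2 in linear terms, i.e. -1.59 dB.
limit_db = to_db(ebn0_limit(1e-6))
```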
Shannon's work established the benchmark by which the performance of all error-control coding techniques, in terms of bit error rate (BER), is measured. However, it was not until the discovery of a new class of convolution codes that the BER of an error-control coding technique came close to the Shannon limit. In C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon Limit Error-Correcting Coding and Decoding: Turbo-Codes (1)," in Proc. IEEE Int. Conf. on Communications (Geneva, Switzerland, May 1993), pp. 1064-1070, such a Turbo-Code encoder is described. The Turbo-Code encoder is built using a parallel concatenation of two Recursive Systematic Convolution (RSC) codes, and the associated decoder, using a feedback decoding rule, is implemented as P pipelined identical elementary decoders. In simulations, the Turbo-Code decoder achieved near-Shannon-limit performance at an Eb/N0 of 0.7 dB. However, the hardware implementation of the decoder required a huge interleaver of 64,500 bits, 18 iterations to decode each symbol, and some ad hoc fine-tuning factors in the updating rule.
In Joachim Hagenauer, Elke Offer, and Lutz Papke, "Iterative Decoding of Binary Block and Convolutional Codes," IEEE Trans. on Information Theory, 42(2):429-445, 1996 (Hagenauer et al.), a decoder is described that accepts soft inputs, including a priori values, and delivers soft outputs that can be split into three terms: the soft channel input, the a priori input, and the extrinsic value. The extrinsic value is used as the a priori value for the next iteration. The iterations are controlled by a stop criterion derived from cross-entropy, which results in a minimal number of iterations. The decoder applies a soft-output Viterbi algorithm (SOVA) to decode turbo codes. SOVA modifies the conventional Viterbi algorithm to generate soft decisions, rather than hard decisions (0/1), based on the updating rule described in Hagenauer et al. Details regarding the Viterbi algorithm are described in "Error Control Systems for Digital Communication and Storage" by Stephen B. Wicker, Prentice Hall, Inc., Englewood Cliffs, N.J., 1995, ISBN 0-13-200809-2.
Hagenauer's updating rule (typically called the HR rule) is a low-complexity approximation algorithm for turbo decoding. Its lower complexity makes it suitable for hardware implementation. However, the HR rule still requires significant memory when an entire block or frame of data must be processed with a single traceback step. In addition, an HR-SOVA decoder using the HR rule is sub-optimal because the a priori (confidence) value grows too rapidly with each iteration. The HR rule erroneously overestimates the confidence value and thereby eliminates opportunities to recognize and correct bit errors.
In Lang Lin and Roger S. Cheng, "Improvements in SOVA-Based Decoding for Turbo Codes," in Proceedings of the ICC, 1997 (Lin et al.), the authors proposed a modification to the HR-SOVA decoder that limits the reliability/confidence values to a small range, compensating for the overestimation of those values in the original SOVA. However, it is unclear whether this fixed-saturation technique creates an upper bound for the extrinsic likelihood value generated from the modified confidence values.
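The fixed-saturation idea can be illustrated as simple clamping of soft reliability values to a fixed range. The sketch below is a generic illustration only; the limit of 4.0 and the sample values are arbitrary and are not taken from Lin et al. or from the present invention:

```python
def saturate(value: float, limit: float) -> float:
    """Clamp a soft reliability value to [-limit, +limit]
    (a generic fixed-saturation sketch)."""
    return max(-limit, min(limit, value))

# Hypothetical soft values; large magnitudes are clipped, small ones pass through.
extrinsic = [12.5, -0.7, 3.1, -9.4]
saturated = [saturate(v, 4.0) for v in extrinsic]
```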
What is needed is an improved Turbo-code decoder whose Bit Error Rate (BER) performance is comparable to full-length-traceback BER performance even when using a small, fixed-length traceback (symbol-by-symbol traceback). A need also exists for a Turbo-code decoder that converges more steeply in the initial few iterations, thereby allowing fewer iterations for the same BER performance; fewer iterations in turn reduce the time required to compute a result. A need also exists for a Turbo-code decoder that consumes less power, or that can process higher data rates at the same clock speed, than fixed-saturation or non-saturation techniques. Finally, the technique must be robust enough to sustain the loss of the least significant bit (LSB) of the likelihood values without deterioration in BER performance, so as to reduce storage requirements.
The present invention overcomes the problems associated with prior-art systems by disclosing a method and apparatus that (a) improves the Bit Error Rate (BER) performance of a priori SOVA (APRI-SOVA) based on the updating rule described in Hagenauer et al. (henceforth referred to as HR-SOVA); (b) does so in fewer iterations, thereby reducing computational requirements; and (c) remains well suited for hardware implementation. The present invention introduces an approximation into the delta and likelihood values produced by a decoder utilizing the HR-SOVA algorithm, in the form of two new saturation algorithms, resulting in an improved Turbo-code decoder.
One embodiment of the present invention is an apparatus which includes a first SOVA decoder that generates a first path reliability value from a channel value and a parity symbol of the encoded frame, and a first a priori likelihood value. A first decorrelation unit then generates a first extrinsic symbol reliability value by decorrelating the channel value and the first a priori likelihood value from the first path reliability value. A first symbol reliability saturation unit saturates the first extrinsic symbol reliability value to generate a first saturated extrinsic symbol reliability value. A first interleaver interleaves the first saturated extrinsic symbol reliability value to generate a second a priori likelihood value for a second stage of the decoder. A second interleaver interleaves the channel value to generate an interleaved channel value for the second stage of the decoder. A second SOVA decoder then generates a second path reliability value from an interleaved parity symbol of the encoded frame, the second a priori likelihood value, and the interleaved channel value. A second decorrelation unit generates a second extrinsic symbol reliability value by decorrelating the interleaved channel value and the second a priori likelihood value from the second path reliability value. A second symbol reliability saturation unit saturates the second extrinsic symbol reliability value to generate a second saturated extrinsic symbol reliability value. A first de-interleaver de-interleaves the second saturated extrinsic symbol reliability value to generate the first a priori likelihood value as an input to the first SOVA decoder. The decoder is completed by a second de-interleaver that de-interleaves the second extrinsic symbol reliability value to generate a decoded message of the convolution encoded data frame.
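The dataflow of this embodiment can be sketched in software. The skeleton below is an illustration only, not the patent's hardware: the sova() routine is passed in as a placeholder (a real implementation performs the SOVA add-compare-select recursion with traceback), decorrelation is modeled as the conventional subtraction of the channel and a priori terms from the path reliability, and the interleaver permutation and saturation limit are arbitrary assumptions:

```python
from typing import Callable, List

def interleave(x: List[float], pi: List[int]) -> List[float]:
    """Permute a frame of soft values according to permutation pi."""
    return [x[pi[i]] for i in range(len(x))]

def deinterleave(x: List[float], pi: List[int]) -> List[float]:
    """Invert the permutation applied by interleave()."""
    out = [0.0] * len(x)
    for i in range(len(x)):
        out[pi[i]] = x[i]
    return out

def saturate(values: List[float], limit: float) -> List[float]:
    """Symbol reliability saturation: clamp each value to [-limit, +limit]."""
    return [max(-limit, min(limit, v)) for v in values]

def turbo_iteration(channel: List[float], parity1: List[float],
                    parity2: List[float], apriori1: List[float],
                    pi: List[int], sova: Callable, limit: float):
    """One iteration of the two-stage SOVA dataflow described above."""
    # Stage 1: SOVA over channel values, first parity stream, a priori input.
    path1 = sova(channel, parity1, apriori1)
    # Decorrelate (subtract channel and a priori terms), then saturate.
    ext1 = saturate([p - c - a for p, c, a in zip(path1, channel, apriori1)], limit)
    # Interleave the saturated extrinsic values and the channel values
    # to form the second stage's inputs.
    apriori2 = interleave(ext1, pi)
    ch2 = interleave(channel, pi)
    # Stage 2: SOVA over interleaved inputs and second parity stream.
    path2 = sova(ch2, parity2, apriori2)
    ext2 = saturate([p - c - a for p, c, a in zip(path2, ch2, apriori2)], limit)
    # De-interleave: the extrinsic value feeds back as stage 1's a priori
    # input; the de-interleaved stage-2 output yields the decoded soft values.
    new_apriori1 = deinterleave(ext2, pi)
    decoded = deinterleave(path2, pi)
    return new_apriori1, decoded
```

In use, turbo_iteration() would be called repeatedly, feeding each returned new_apriori1 back in as the next call's apriori1, until the iteration budget or a stop criterion is reached.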
In a further embodiment, the apparatus further includes a first path reliability saturation unit that saturates the first path reliability value received from the first SOVA decoder to generate a first saturated path reliability value as an input to the first decorrelation unit. A second path reliability saturation unit saturates the second path reliability value received from the second SOVA decoder to generate a second saturated path reliability value as an input to the second decorrelation unit and the second de-interleaver. Moreover, the saturation technique of the present invention is sufficiently robust to sustain the loss of the LSB of the first and second a priori likelihood values in the interleaver memories.
Advantages of the invention include ease of hardware implementation. Moreover, hardware implementations employing the saturation technique of the present invention benefit from improved BER characteristics. Furthermore, this saturation technique results in faster convergence, thereby requiring fewer decoding iterations to generate a result than fixed-saturation and non-saturating techniques; this can reduce power consumption or allow higher data rates to be processed at the same clock speed. Compared to non-saturating schemes, the saturation technique of the present invention also requires a smaller word size for the first and second extrinsic symbol reliability values, resulting in reduced memory sizes for the interleaver memories.