This invention relates generally to communications systems. More specifically, the present invention relates to methods for decoding data in digital communications systems.
FIG. 1 depicts a block diagram of a parallel concatenated recursive systematic convolutional coder 100 known in the art. Coder 100 falls into the class of coders known as “turbo coders.” Coder 100 includes a first constituent encoder 102, a second constituent encoder 104, a multiplexer 106, and a buffer interleaver 110. Data to be encoded by coder 100 is input to multiplexer 106 and to buffer interleaver 110. An output of multiplexer 106 is input to a recursive convolutional encoder 108. Similarly, an output of buffer interleaver 110 is input to a recursive convolutional encoder 108′. An INTERMEDIATE OUTPUT of recursive convolutional encoder 108 is also input to multiplexer 106. A control signal, “TERMINATION CONTROL,” selects which input of multiplexer 106 is output to convolutional encoder 108 and to a systematic first-in-first-out queue (“FIFO”) 112. An output of systematic FIFO 112 generates a first one of four outputs of coder 100, SYSTEMATIC-1. An ENCODER OUTPUT is input to a parity first-in-first-out queue (“FIFO”) 114. An output of parity FIFO 114 generates a second one of four outputs of coder 100, PARITY-1.
Continuing with recursive convolutional encoder 108, the output of multiplexer 106 is input to an input of an exclusive OR (XOR) gate 116. A second input of XOR gate 116 receives the INTERMEDIATE OUTPUT of recursive convolutional encoder 108. An output of XOR gate 116 is input to a first bit latch 118. An output of bit latch 118 is input to a second bit latch 120. An output of bit latch 120 is input to an XOR gate 122 and to an XOR gate 124. A second input of XOR gate 122 receives the output of XOR gate 116. A second input of XOR gate 124 receives the output of bit latch 118. An output of XOR gate 124 generates the INTERMEDIATE OUTPUT of recursive convolutional encoder 108.
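The latch-and-gate structure just described can be modeled directly. The sketch below is a hypothetical Python rendering of recursive convolutional encoder 108, with latches 118 and 120 held as two state bits (assumed initialized to zero); the function name and list-based interface are illustrative, not part of the figure.

```python
def rsc_parity(bits):
    """Model of recursive convolutional encoder 108: two bit latches
    (118, 120) with XOR gates 116, 122, and 124 wired as described.
    Both latches are assumed to start at zero."""
    s118 = s120 = 0  # outputs of bit latches 118 and 120
    parity = []
    for u in bits:
        intermediate = s118 ^ s120   # XOR gate 124: INTERMEDIATE OUTPUT
        d = u ^ intermediate         # XOR gate 116
        parity.append(d ^ s120)      # XOR gate 122: ENCODER OUTPUT (parity)
        s118, s120 = d, s118         # clock: latch 118 <- d, latch 120 <- latch 118
    return parity
```

The feedback through XOR gates 124 and 116 is what makes the encoder recursive: a single input one excites the shift register and continues to affect the parity stream thereafter.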
Continuing with recursive convolutional encoder 108′, the output of buffer interleaver 110 generates a third one of four outputs of coder 100, SYSTEMATIC-2. In one embodiment of coder 100, recursive convolutional encoder 108′ is identical to recursive convolutional encoder 108. In this case, the final output of coder 100, PARITY-2, is generated by an XOR gate 122′ (not shown). As described below, recursive convolutional encoder 108′ may differ from recursive convolutional encoder 108.
Continuing with the operation of coder 100, a data stream to be encoded is input to both recursive convolutional encoders. These encoders create redundant information (PARITY-1, PARITY-2) that is transmitted in addition to the original data stream (SYSTEMATIC-1, SYSTEMATIC-2). Buffer interleaver 110 reorders the input data stream prior to the operation of encoder 108′.
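The four-output structure just described can be sketched as follows. The hypothetical function below takes the input block, a permutation modeling buffer interleaver 110, and a constituent-encoder function; all names are illustrative, and passing the same `rsc` for both parity streams reflects the embodiment in which encoders 108 and 108′ are identical.

```python
def turbo_outputs(bits, permutation, rsc):
    """Sketch of coder 100's four outputs. `rsc` stands in for a
    recursive convolutional encoder (108 or 108'); `permutation`
    models buffer interleaver 110 as an index reordering."""
    interleaved = [bits[i] for i in permutation]  # buffer interleaver 110
    return {
        "SYSTEMATIC-1": list(bits),    # via systematic FIFO 112
        "PARITY-1": rsc(bits),         # encoder 108, via parity FIFO 114
        "SYSTEMATIC-2": interleaved,   # output of buffer interleaver 110
        "PARITY-2": rsc(interleaved),  # encoder 108' (assumed identical here)
    }
```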
Later, a decoder receiving the data stream uses the systematic and parity data to recreate the most likely original data stream based on the polynomials instantiated in the recursive convolutional encoders. Known methods of decoding turbo coded data streams have at least two disadvantages.
First, the calculation of a possible data stream from its parity and systematic data streams is dependent upon the starting state of the recursive convolutional encoders used by the coder. In a turbo coding scheme, these calculations are made in a forward direction (first data bit to last data bit) and in a backward direction (last data bit to first data bit) to improve the accuracy of the output. Both of these calculations require knowledge of the initial (or final) state of the recursive convolutional encoders. Otherwise, the accuracy of the data stream will be limited during the initial (or final) bit calculations. In the prior art, it is known to initially set the recursive convolutional encoders to a known state. This scheme addresses the forward calculation problem. However, the determination of the final state is more difficult. Multiplexer 106 may be used to input predetermined data into recursive convolutional encoder 108 after processing the data stream. This data forces the encoder to a final known state. Unfortunately, buffer interleaver 110 rearranges these bits in the data stream prior to input to encoder 108′. Consequently, encoder 108′ ignores these final inputs. Encoder 108′ may include a multiplexer 106′ (not shown) analogous to multiplexer 106 to force the final state of recursive convolutional encoder 108′ to a known state. This latter approach requires that additional systematic and parity data generated by encoder 108′ be transmitted to, and processed by, the decoder.
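The termination mechanism described above can be made concrete. When TERMINATION CONTROL routes the INTERMEDIATE OUTPUT back through multiplexer 106, XOR gate 116 receives the same value on both inputs and therefore outputs zero, so two clocks flush the two bit latches. The following sketch is a hypothetical model of that step; the function name and return shape are illustrative.

```python
def terminate(s118, s120):
    """Model of termination via multiplexer 106: feeding the
    INTERMEDIATE OUTPUT back as the encoder input zeroes the output
    of XOR gate 116, flushing both latches in two clocks. Returns the
    (systematic, parity) tail bits and the final latch state."""
    tail = []
    for _ in range(2):               # one clock per bit latch
        u = s118 ^ s120              # mux selects INTERMEDIATE OUTPUT
        d = u ^ (s118 ^ s120)        # XOR gate 116: forced to zero
        tail.append((u, d ^ s120))   # tail systematic bit, tail parity bit
        s118, s120 = d, s118         # clock the latches
    return tail, (s118, s120)
```

This procedure terminates encoder 108 only: as the paragraph above notes, buffer interleaver 110 scatters these tail bits before encoder 108′ would see them.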
Second, a turbo coding scheme decodes a block of data by iteratively processing the data through two sequential maximum-a-posteriori (“MAP”) decoders. The data is interleaved between the two sequential MAP decoders to reflect the effect of buffer interleaver 110. During each iteration, one MAP decoder uses a soft decision (intermediate guess) of the other MAP decoder in its current calculation. Specifically, the branch metric (γ) is the conditional probability that a particular transition occurred from one state to another state, assuming the starting state was the correct state, and given the received systematic and parity bits. According to known techniques, this calculation requires three operands: (1) the soft decision based on the previous iteration, (2) the probability that the received systematic bit is a one or a zero, and (3) the probability that the received parity bit is a one or a zero. The requirement of three operands in this repetitive calculation significantly increases the processing requirement of the MAP decoder, whether instantiated in hardware or as a software routine.
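In the log domain, the three operands enter the branch metric additively. The sketch below is a common log-likelihood-ratio formulation of such a three-operand γ computation, not taken from the source; the signature, the ±1 bit mapping, and the 1/2 scaling are assumptions of this illustration.

```python
def branch_metric(apriori_llr, sys_llr, par_llr, u, p):
    """Hypothetical log-domain branch metric (gamma) for a trellis
    transition hypothesized to emit systematic bit u and parity bit p.
    The three operands: (1) apriori_llr, the soft decision from the
    other MAP decoder's previous iteration; (2) sys_llr, the channel
    LLR of the received systematic bit; (3) par_llr, the channel LLR
    of the received parity bit."""
    x = 1 if u else -1  # map hypothesized bits to +/-1
    z = 1 if p else -1
    return 0.5 * (x * (apriori_llr + sys_llr) + z * par_llr)
```

In this form the cost the paragraph identifies is visible: every transition in every iteration combines all three operands, which is the per-branch workload of the MAP decoder whether realized in hardware or software.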