Generally, the Shannon capacity is the limit on the data transmission rate at which reliable communication can be secured over a channel having various noises. Many efforts have been made to devise channel codes whose performance approaches the channel capacity limit while satisfying such channel code conditions as encoding and decoding complexity, decoding speed and the like.
A next generation mobile communication system requires a fast data rate and reliable data transfer. For this, a powerful channel-encoding scheme is necessary. A turbo encoder is configured in a manner that two recursive convolutional encoders are connected in parallel with an interleaver in between. Since the performance of turbo codes is close to the Shannon capacity, turbo codes can become channel codes suitable for the next generation mobile communication system.
FIG. 1 is a block diagram of a general turbo encoder according to a related art. Referring to FIG. 1, a first constituent encoder 1 receives information bits and then generates first redundant bits by encoding the received information bits. A second constituent encoder 3 receives information bits interleaved by an encoder interleaver 2 and then generates second redundant bits by encoding the interleaved information bits. In case of turbo encoding, the first and second redundant bits are parity bits.
For highly reliable decoding by a turbo decoder, each constituent encoder of a turbo encoder adopts either a scheme of forcing the trellis to be terminated by inserting tail bits or a circular coding scheme. In the circular coding scheme, an initial state of the trellis is set equal to a final state of the trellis. Since no additional bits are inserted, the circular coding scheme provides band efficiency higher than that of the tail-bit inserting scheme. In this case, the common initial and final state is named a circular state. In order to determine the circular state, the initial state is set to the zero state and the final state is then found by performing encoding. The circular state can then be calculated from this final state and the specific encoder configuration. And, the redundant bits are generated by re-executing the encoding with the calculated circular state set as the initial state.
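To illustrate the circular-state property, the following Python sketch finds the circular state by brute force, i.e., by testing each candidate initial state until one is found that the encoder returns to after processing the whole block. This is a hypothetical illustration: the rate-1/2 recursive systematic encoder with feedback polynomial 7 and feedforward polynomial 5 (octal), memory 2, is an assumption of this sketch, and a practical implementation would instead encode once from the zero state and derive the circular state from the final state and the encoder configuration, as described above.

```python
def rsc_step(state, bit):
    # One step of a rate-1/2 recursive systematic convolutional encoder,
    # feedback polynomial 7 (oct), feedforward polynomial 5 (oct), memory 2.
    # The 2-bit state holds the last two feedback values.
    s1, s0 = (state >> 1) & 1, state & 1
    fb = bit ^ s1 ^ s0            # feedback: input XOR taps of poly 1+D+D^2
    parity = fb ^ s0              # parity: taps of poly 1+D^2 on the feedback
    next_state = (fb << 1) | s1   # shift the feedback bit into the register
    return next_state, parity

def circular_state(bits, n_states=4):
    # Brute force: the circular state is the initial state that the encoder
    # returns to after processing the whole information block. For some
    # block lengths no circular state exists; None is returned in that case.
    for s0 in range(n_states):
        s = s0
        for b in bits:
            s, _ = rsc_step(s, b)
        if s == s0:
            return s0
    return None
```

For the block [1, 0, 1, 1], for example, state 1 is circular: starting the encoder in state 1 and encoding all four bits returns it to state 1.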
FIG. 2 is a block diagram of a turbo decoder in accordance with a related art. Referring to FIG. 2, an information bit sequence, a first redundant bit sequence and a second redundant bit sequence outputted from the encoder shown in FIG. 1 are modulated and then transmitted to a receiver equipped with the turbo decoder shown in FIG. 2.
The receiver receives the modulated bit sequences, demodulates the received bit sequences, and then supplies the demodulated bit sequences to the turbo decoder shown in FIG. 2. A first constituent decoder 4 receives the demodulated information bits and the demodulated first redundant bits, performs a decoding process on the received bits, and then calculates extrinsic information of the first constituent decoder 4.
A decoder interleaver 5 interleaves the extrinsic information of the first constituent decoder 4 and then inputs the interleaved extrinsic information to a second constituent decoder 6. The second constituent decoder 6 receives the demodulated information bits, the demodulated second redundant bits and the first constituent decoder's extrinsic information interleaved by the decoder interleaver 5, performs a decoding process on the received bits and information, and then calculates extrinsic information of the second constituent decoder 6.
A decoder deinterleaver 7 rearranges the extrinsic information of the second constituent decoder 6 and then inputs the rearranged extrinsic information to the first constituent decoder 4. The above explanation describes a decoding process corresponding to an iterative decoding count of 1. And, the iterative decoding process keeps proceeding until a specific decoding performance is achieved.
Thus, the turbo decoder consists of a plurality of constituent decoders. And, internal elements of each of the constituent decoders calculate various kinds of metrics to perform a decoding operation. These metrics are classified into a transition metric, a forward metric and a backward metric. And, a log likelihood ratio (hereinafter abbreviated LLR) of information bits and the like are calculated based on the forward and backward metrics.
FIG. 3A and FIG. 3B are diagrams for forward and backward metric operational methods in a turbo decoder according to a related art, respectively.
Referring to FIG. 3A, a kth forward metric αk is calculated from a (k−1)th forward metric αk−1 and a (k−1)th transition metric γk−1. And, referring to FIG. 3B, a kth backward metric βk is calculated from a (k+1)th backward metric βk+1 and a (k+1)th transition metric γk+1.
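The two recursions above can be sketched in Python as follows. This is a simplified illustration using the max-log approximation (plain max in place of the full log-MAP max* operation); the representation of the trellis as a dense transition-metric table with −infinity marking invalid transitions is an assumption of this sketch, not a detail given above.

```python
import math

def forward_metrics(gamma, n_states, alpha0):
    # gamma[k][s][sp]: transition metric from state s to state sp at step k.
    # Forward recursion: alpha_k(sp) = max over s of (alpha_{k-1}(s) + gamma_{k-1}(s, sp)).
    alpha = [list(alpha0)]
    for g in gamma:
        nxt = [-math.inf] * n_states
        for s in range(n_states):
            for sp in range(n_states):
                nxt[sp] = max(nxt[sp], alpha[-1][s] + g[s][sp])
        alpha.append(nxt)
    return alpha

def backward_metrics(gamma, n_states, beta_last):
    # Backward recursion: beta_k(s) = max over sp of (beta_{k+1}(sp) + gamma_k(s, sp)).
    beta = [list(beta_last)]
    for g in reversed(gamma):
        cur = [-math.inf] * n_states
        for s in range(n_states):
            for sp in range(n_states):
                cur[s] = max(cur[s], beta[0][sp] + g[s][sp])
        beta.insert(0, cur)
    return beta
```

The LLR of the information bit at step k would then be obtained by combining alpha_k, gamma_k and beta_{k+1} over the transitions corresponding to bit value 1 versus bit value 0.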
Although a general turbo decoder needs to calculate both forward and backward metrics to decide the transferred information bits, since the forward metric calculating process sequentially proceeds from a first information bit to an Nth information bit and the backward metric calculating process sequentially proceeds from the Nth information bit to the first information bit, there occurs a delay amounting to an entire information frame length. Besides, since the turbo decoder performs iterative decoding, the repeated calculation of the forward and backward metrics cannot be avoided. So, such a decoding delay is regarded as a disadvantage of the turbo code decoder.
To solve the above disadvantage, various decoding schemes have been proposed, each of which divides a frame length (N) of information bits into nsub sub-blocks and performs parallel decoding on each of the sub-blocks.
As representative decoding schemes, there are a scheme of inserting tail bits per sub-block and a scheme of performing an additional metric calculation by leaving guard windows in front of and behind the trellis of each sub-block when calculating forward and backward metrics in the sub-block decoding process.
These parallel decoding schemes are advantageous in that the decoding delay is reduced to about 1/nsub of that of a conventional decoding scheme. Yet, they are disadvantageous in that tail bits should be inserted per sub-block or an additional metric calculation needs to be done to prevent metric reliability from being lowered in decoding each sub-block. In particular, the tail bit insertion decreases the data rate, while the additional metric calculation increases decoding complexity. Moreover, the performance degradation of these parallel decoding schemes accelerates as the length of each sub-block gets smaller, i.e., as nsub gets larger.
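For illustration, the index arithmetic of the guard-window scheme can be sketched as follows. This is a hypothetical sketch: the uniform sub-block size and the clipping of windows at the frame boundaries are assumptions of the illustration, not details given above.

```python
def sub_block_windows(N, n_sub, guard):
    # Split a frame of N trellis steps into n_sub sub-blocks and extend each
    # sub-block by `guard` steps on both sides (clipped to [0, N)), so that
    # the forward and backward metric recursions warm up over the guard
    # window before reaching the sub-block itself.
    # Returns (core_start, core_end, win_start, win_end) per sub-block.
    step = (N + n_sub - 1) // n_sub   # ceil(N / n_sub)
    windows = []
    for i in range(n_sub):
        core_start = i * step
        core_end = min(N, (i + 1) * step)
        win_start = max(0, core_start - guard)
        win_end = min(N, core_end + guard)
        windows.append((core_start, core_end, win_start, win_end))
    return windows
```

The extra metric steps per sub-block, 2*guard in the interior of the frame, are precisely the additional metric calculations mentioned above as a source of increased decoding complexity.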
FIG. 4A and FIG. 4B are diagrams for forward and backward metric operational methods in a turbo code decoding apparatus adopting a circular decoding scheme, respectively.
Referring to FIG. 4A and FIG. 4B, in case of the forward metric operation, when a forward metric αN(i) in a final state is calculated in an ith iterative decoding, it is used as a forward metric α0(i+1) in an initial state in an (i+1)th iterative decoding. In case of the backward metric operation, when a backward metric β0(i) in an initial state is calculated in an ith iterative decoding, it is used as a backward metric βN(i+1) in a final state in an (i+1)th iterative decoding. Through these methods, the sequential calculation process of forward and backward metrics can continue without interruption, whereby metric reliability can be gradually raised.
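The metric carry-over between iterations can be sketched for the forward direction as follows. This is a simplified illustration using the max-log approximation and a per-step normalization of the metrics; both are assumptions of the sketch rather than details given above.

```python
import math

def circular_forward(gamma, n_states, n_iter):
    # Circular decoding sketch: the final forward metric alpha_N from
    # iteration i initializes alpha_0 of iteration i+1, instead of
    # restarting each iteration from a fixed initialization.
    alpha0 = [0.0] * n_states          # uniform start for the 1st iteration
    for _ in range(n_iter):
        alpha = alpha0
        for g in gamma:                # one full forward sweep over the frame
            nxt = [-math.inf] * n_states
            for s in range(n_states):
                for sp in range(n_states):
                    nxt[sp] = max(nxt[sp], alpha[s] + g[s][sp])
            # Normalize so the metrics do not drift across iterations.
            m = max(nxt)
            alpha = [v - m for v in nxt]
        alpha0 = alpha                 # alpha_N(i) becomes alpha_0(i+1)
    return alpha0
```

With fixed transition metrics, repeated sweeps drive the initialization toward a stable value, which mirrors the gradual rise in metric reliability described above.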