The present application relates to error control systems in data transmission, especially to iterative decoding using parallel concatenated encoders, such as turbo decoders.
Background: Signals and Noisy Channels
No communications channel is perfect. In the real world, every communications channel includes some “noise” (unpredictable non-signal inputs). The amount of noise is usually quantified by the signal-to-noise ratio, or “SNR”, which is stated in decibels. (For example, if the signal power is one hundred times the noise power, the SNR would be 20 dB.) Higher SNRs are more desirable, but the SNR (at a given transmission rate) is set by the physics of the channel, as well as by the design of the transmitter and receiver. Even though digital signals are generally less susceptible to degradation than analog signals, many channels include enough noise to induce some errors in a digital signal. In many applications a desired data rate can easily be achieved at reasonable cost, but the worst-case level of bit errors in such a configuration is excessive. In such cases coding techniques can be used to reduce the error rate (at the cost of a slightly reduced gross data rate).
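The decibel figure above follows directly from the definition SNR(dB) = 10·log₁₀(P_signal/P_noise). A minimal sketch of this arithmetic (the function name is illustrative, not from the application):

```python
import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio in decibels: 10 * log10(Ps / Pn)."""
    return 10.0 * math.log10(signal_power / noise_power)

# A signal one hundred times stronger than the noise gives 20 dB.
print(snr_db(100.0, 1.0))  # -> 20.0
```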
Similar techniques can also be used with data storage architectures. In this case, the “channel” is the storage medium and the interfaces to it. For example, in a hard disk drive the write head is an analog driver stage which may not be perfectly aligned to the desired radial position on the disk. Data can be degraded, during writing, storage, or reading, by factors such as mistracking, overtemperature, particulate contamination, or mechanical failure.
Background: Error-Control Coding
Coded digital communication systems use error control codes to improve data reliability at a given signal-to-noise ratio (SNR). For example, an extremely simple form (used in data storage applications) is to generate and transmit a parity bit with every eight bits of data; by checking parity on each block of nine bits, single-bit errors can be detected. (By adding three further check bits to each block, for four check bits in total as in a Hamming code, single-bit errors can be detected and corrected.) In general, error control coding includes a large variety of techniques for generating extra bits to accompany a data stream, allowing errors in the data stream to be detected and possibly corrected.
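The simple parity scheme described above can be sketched as follows (even parity; function names are illustrative):

```python
def add_parity(byte_bits):
    """Append an even-parity bit to an 8-bit block of 0/1 values."""
    parity = sum(byte_bits) % 2
    return byte_bits + [parity]

def parity_ok(block):
    """True if the 9-bit block still has even parity (no single-bit error detected)."""
    return sum(block) % 2 == 0

data = [1, 0, 1, 1, 0, 0, 1, 0]
block = add_parity(data)
assert parity_ok(block)        # clean block passes the check
block[3] ^= 1                  # flip one bit "in the channel"
assert not parity_ok(block)    # the single-bit error is detected
```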
A famous existence proof (Shannon's noisy channel coding theorem) states that error rates can be made as low as desired by using optimal error control codes. The theorem did not itself show how to construct such optimal codes, and for decades no coding schemes came close to the theoretical limits it defined. However, a major breakthrough was achieved in 1993, when turbo codes were introduced.
Background: “Turbo” Coding
The encoder side of a turbo coding architecture typically uses two encoders, one operating on the raw data stream and one on a shuffled copy of the base data stream, to generate two parity bits for each bit of the raw data stream. The encoder output thus contains three times as many bits as the incoming data stream. This “parallel concatenated encoder” (or “PCE”) configuration is described in detail below.
The most surprising part of turbo coding was its decoding architecture. The decoder side invokes a process which (if the channel were noiseless) would merely reverse the transformation performed on the encoder side, to reproduce the original data. However, the decoder side is configured to operate on soft estimates of the information bits and refines the estimates through an iterative reestimation process. The decoder does not have to reach a decision on its first pass, but is generally allowed to iteratively improve the estimates of the information bits until convergence is achieved. (A more detailed description of this is provided below.)
One drawback to the turbo decoder is that some received codewords require many iterations to converge. The ability to improve estimates by iteration is a great strength, since it means that the decoding performance can be nearly optimal; but every iteration requires additional time, computing resources, and/or battery energy. Thus in practice it is desirable to find a way to stop turbo decoding as soon as convergence is achieved. Worst-case channel conditions may require many more iterations than do best-case conditions, and it is desirable to find some way to adjust the number of iterations to no more than required.
Background: Stopping Criteria for Turbo Decoders
Originally, turbo decoders would execute a specific number of iterations regardless of the number of errors inserted by the channel. This was inefficient, since fewer iterations are needed when there are fewer errors in the incoming signal. Efforts have therefore been made to develop adaptive stopping criteria, which would provide an indicator of when the turbo decoding iterations can be stopped.
U.S. Pat. No. 5,761,248 describes adaptive stopping based on cross entropy, or convergence of the output bits: when the decoder's digital output bit estimates (for some block of data) stop changing, the decoding process for that block is halted. However, this stopping criterion requires at least two iterations of the decoding process (in order to generate enough outputs to compare to one another). This can be a waste of processing effort in systems where the channel introduces few errors, or where there is a large amount of data to be decoded. Other stopping criteria, based on the cross entropy criterion, abort decoding based on the ratio of sign changes in the extrinsics to the number of extrinsics.
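The sign-change criterion mentioned last can be sketched as follows: the decoder compares the extrinsic values from two successive iterations and stops when the fraction whose sign flipped falls below a small threshold. (The threshold value and all names here are illustrative, not taken from the cited patent.)

```python
def sign_change_ratio(prev_extrinsics, curr_extrinsics):
    """Fraction of extrinsic values whose sign flipped between two iterations."""
    flips = sum((a < 0) != (b < 0)
                for a, b in zip(prev_extrinsics, curr_extrinsics))
    return flips / len(curr_extrinsics)

def should_stop(prev, curr, threshold=0.005):
    """Abort decoding when fewer than `threshold` of the extrinsics changed sign.
    Note this needs at least two iterations' worth of extrinsics to compare."""
    return sign_change_ratio(prev, curr) < threshold

prev = [1.2, -0.8, 2.5, -3.1, 0.4, -0.2]
curr = [1.5, -1.1, 2.9, -3.4, 0.6, -0.5]   # no sign flips: converged
assert should_stop(prev, curr)
curr2 = [1.5, 1.1, -2.9, -3.4, 0.6, -0.5]  # two flips of six: keep iterating
assert not should_stop(prev, curr2)
```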
Another attempt at adaptive stopping was described in Robertson, “Illuminating the Structure of Code and Decoder of Parallel Concatenated Recursive Systematic (Turbo) Codes,” 1994 GLOBECOM Proceedings 1298, which suggests that the VARIANCE of the estimates can be used for a stopping criterion.
Turbo Decoding with Improved Adaptive Stopping
The present application teaches an improvement to a stopping criterion presented in commonly-owned U.S. provisional application No. 60/179,055, filed Jan. 31, 2000, which has the same effective filing date as the present application, and which also has overlapping inventorship with the present application, and which is hereby incorporated by reference for all purposes.
That application teaches an innovative method of aborting the decoding iteration process. Instead of a comparison between successive outputs of the MAP decoders, an absolute indication of the decoded signal quality is compared with a threshold signal quality. The decoders generate soft outputs, or estimates of the original transmitted data symbols. With each decoding iteration, these extrinsic values diverge from zero (i.e., their absolute values increase), which indicates greater certainty of the value of the transmitted bit. The values of the extrinsics are used to compute an estimate of the overall signal quality: the mean and variance of the extrinsic distribution are combined into the estimate. When the signal quality reaches the desired performance threshold, iterations are aborted.
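One way to read the quality test just described: treat the absolute extrinsic values as samples, form an SNR-like statistic from their mean and variance, and compare it against a threshold. A minimal sketch under that assumption (the referenced application's exact statistic and datapath may differ):

```python
def extrinsic_snr_estimate(extrinsics):
    """SNR-like quality figure: squared mean of |extrinsic| divided by the
    variance of |extrinsic|.  Larger magnitudes, tightly grouped, score high."""
    n = len(extrinsics)
    mags = [abs(x) for x in extrinsics]
    mean = sum(mags) / n                              # division 1
    var = sum(m * m for m in mags) / n - mean * mean  # division 2
    return mean * mean / var                          # division 3

def converged(extrinsics, threshold):
    return extrinsic_snr_estimate(extrinsics) > threshold

early = [0.5, -0.3, 0.8, -0.6, 0.4, -0.7]    # small, scattered: uncertain
late = [9.5, -9.8, 10.2, -9.9, 10.1, -9.7]   # large, tightly grouped: confident
assert extrinsic_snr_estimate(late) > extrinsic_snr_estimate(early)
```

Note that evaluating this form literally requires three divisions per check, which motivates the division-free rearrangement taught below.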
Implementation of the comparison of the decoded signal quality to the threshold requires 3 divisions, 1 multiplication, 1 subtraction, and 1 comparison. All these functions are easily built in a hardware solution except the division function. Typically, division functions A/B are implemented as A×1/B, with 1/B implemented as a lookup table. This solution only works if the range of numbers for B is relatively small. Unfortunately, the previous solution for comparing the SNR required division by N (the number of extrinsics), which can be very large, often ranging from 320 to 5120. This division also has numerators that can range from 10² to 10⁶. The range for the required divisions therefore spans many orders of magnitude.
The present application teaches a stopping criterion implementation that does not require division by a variable. By manipulating the comparison equation, the previously required division functions are replaced by multiplications. This innovation greatly simplifies hardware implementation and improves processing speed.
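One possible rearrangement of this kind, consistent with the description above but not necessarily the application's exact hardware datapath: with S = Σ|λᵢ|, Q = Σλᵢ², and N extrinsics, the test mean²/variance > T is algebraically equivalent (for positive variance) to (1 + T)·S² > T·N·Q, which uses multiplications only. A hedged sketch of both forms, checking that they agree:

```python
def converged_division_free(extrinsics, threshold):
    """Division-free quality test.  With S = sum|x| and Q = sum x^2,
    mean^2 / var > T  <=>  (1 + T) * S * S > T * N * Q  (var > 0),
    obtained by multiplying the original inequality through by N^2 * var."""
    n = len(extrinsics)
    s = sum(abs(x) for x in extrinsics)
    q = sum(x * x for x in extrinsics)
    return (1 + threshold) * s * s > threshold * n * q

def converged_with_divisions(extrinsics, threshold):
    """Reference form with explicit divisions, for comparison only."""
    n = len(extrinsics)
    mean = sum(abs(x) for x in extrinsics) / n
    var = sum(x * x for x in extrinsics) / n - mean * mean
    return mean * mean / var > threshold

vals = [3.1, -2.8, 3.4, -3.0, 2.9, -3.3]
for t in (5.0, 50.0, 500.0):
    assert converged_division_free(vals, t) == converged_with_divisions(vals, t)
```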