I. Field of the Invention
The present invention relates to the field of information coding for communications systems, and more specifically to intersymbol interference cancellation and turbo coding.
II. Background
Transmission of digital data is inherently prone to noise and interference, which may introduce errors into the transmitted data. Error detection schemes have been suggested to determine as reliably as possible whether errors have been introduced into the transmitted data. For example, it is common to transmit data in packets and to add to each packet a cyclic redundancy check (CRC) field, for example sixteen bits in length, which carries a checksum of the data of the packet.
When a receiver receives the data, the receiver calculates the same checksum on the received data and verifies whether the result of the calculation is identical to the checksum in the CRC field.
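By way of illustration, the CRC generation and verification described above may be sketched as follows. The CCITT-16 polynomial 0x1021 and initial value 0xFFFF are illustrative choices only; no particular polynomial is mandated here.

```python
def crc16(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """Bitwise CRC-16 over a packet (CCITT polynomial, illustrative choice)."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF  # shift and divide by the polynomial
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

packet = b"example payload"
checksum = crc16(packet)                       # transmitter appends this 16-bit field
print(crc16(packet) == checksum)               # receiver's check passes: True
print(crc16(b"exbmple payload") == checksum)   # corrupted packet fails: False
```

Because a CRC-16 detects any error burst confined to sixteen or fewer bits, the single corrupted byte in the second check is always caught.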
When the transmitted data is not used in real time, it is possible to request retransmission of erroneous data when errors are detected. However, correcting more errors at the receiver reduces the number of such requests, improving the efficiency of transmission. Moreover, when the transmission is performed in real time, such as, e.g., in telephone lines, cellular phones, remote video systems, etc., it is not possible to request retransmission.
Various forward error correction (FEC) coding techniques have been introduced to allow receivers of digital data to correctly determine the transmitted data even when errors may have occurred during transmission. For example, convolutional codes introduce redundancy into the transmitted data such that each bit is dependent on earlier bits in the sequence. Thus, when errors occur, the receiver can still deduce the original data by tracing back possible sequences in the received data. Moreover, the coded transmitted data may be packed into data packets.
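The redundancy described above may be sketched with a rate-1/2 convolutional encoder of constraint length three, using the standard textbook generators 7 and 5 (octal); these particular generators are an illustrative assumption, not prescribed here.

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2 convolutional encoder: two output bits per input bit, each a
    parity over the current and two previous input bits."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111          # shift new bit into 3-bit register
        out.append(bin(state & g1).count("1") & 1)  # parity under generator g1
        out.append(bin(state & g2).count("1") & 1)  # parity under generator g2
    return out

print(conv_encode([1, 0, 1, 1]))  # → [1, 1, 1, 0, 0, 0, 0, 1]
```

Each output bit depends on the current and the two preceding input bits, which is precisely the dependency that lets a decoder trace back the likely transmitted sequence after errors occur.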
To further improve the performance of a transmission channel, some coding schemes include interleavers, which rearrange the order of the coded bits in the packet. Thus, when interference destroys some adjacent bits during transmission, the effect of the interference is spread out over the entire original packet and can more readily be overcome by the decoding process. Other improvements may include multiple-component codes that encode the packet more than once, in parallel or in series. For example, it is known in the art to employ concatenated coding, and error correction methods that use at least two convolutional coders serially or in parallel. Such parallel encoding is commonly referred to as turbo coding.
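A simple row/column block interleaver, one of many possible interleaver designs, illustrates how adjacent bits are separated so that a burst of interference is spread over the packet:

```python
def block_interleave(bits, rows, cols):
    """Write the bits row by row into a rows x cols array and read them out
    column by column, so adjacent input bits end up `rows` positions apart."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def block_deinterleave(bits, rows, cols):
    """Inverse permutation: interleave with the dimensions swapped."""
    return block_interleave(bits, cols, rows)

data = list(range(12))
tx = block_interleave(data, rows=3, cols=4)
print(tx)                                    # → [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
print(block_deinterleave(tx, 3, 4) == data)  # → True
```

A burst destroying, say, the first three transmitted positions (original bits 0, 4, 8) damages bits that are four positions apart in the original packet, which the decoder can more readily correct.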
For multiple-component codes, optimal decoding is often a very complex task, and may require large periods of time not usually available for real-time decoding. Iterative decoding techniques have been developed to overcome this problem. Rather than determining immediately whether received bits are zero or one, the receiver assigns each bit a value on a multilevel scale representative of the probability that the bit is a zero. A common scale of such probabilities, referred to as the log-likelihood ratio (LLR), represents each bit by a real number or, more commonly, an integer in some range, e.g., {−32, …, 31}. A value of 31 signifies that the transmitted bit was a zero with very high probability, and a value of −32 signifies that the transmitted bit was a one with very high probability. A value of zero indicates that the logical bit value is indeterminate.
Data represented on the multilevel scale is referred to as “soft data,” and iterative decoding is usually soft-in/soft-out, i.e., the decoding process receives a sequence of inputs corresponding to probabilities for the bit values and provides as output corrected probabilities, taking into account constraints of the code. Generally, a decoder that performs iterative decoding uses soft data from former iterations to decode the soft data read by the receiver. During iterative decoding of multiple-component codes, the decoder uses results from the decoding of one code to improve the decoding of the second code. When serial encoders are used, two decoders may be used serially for this purpose. When parallel encoders are used, as in turbo coding, two corresponding decoders may conveniently be used in parallel for this purpose. Such iterative decoding is carried out for a plurality of iterations until it is believed that the soft data closely represents the transmitted data. Those bits whose soft values indicate that they are more likely a zero (for example, between 0 and 31 on the scale described above) are assigned binary zero, and the remaining bits are assigned binary one.
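Under the sign convention described above, in which values toward 31 indicate a zero and values toward −32 indicate a one, the final hard-decision step may be sketched as:

```python
def hard_decision(llrs):
    """Map soft LLR values to bits: non-negative values (0..31) indicate a
    likely zero, negative values (-32..-1) a likely one."""
    return [0 if llr >= 0 else 1 for llr in llrs]

print(hard_decision([31, -32, 5, -1, 0]))  # → [0, 1, 0, 1, 0]
```

Note that a soft value of exactly zero is indeterminate; assigning it to binary zero here is an arbitrary tie-breaking choice made for illustration.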
“Turbo coding” represents an important advancement in the area of FEC. There are many variants of turbo coding, but most types of turbo coding use multiple encoding steps separated by interleaving steps, combined with the use of iterative decoding. This combination provides previously unavailable performance with respect to noise tolerance in a communications system. Namely, turbo coding allows communications at ratios of energy per bit to noise power spectral density (Eb/N0) that were previously unacceptable using the existing forward error correction techniques.
Many communications systems use forward error correction techniques and therefore would benefit from the use of turbo coding. For example, turbo codes could improve the performance of wireless satellite links, in which the limited downlink transmit power of the satellite necessitates receiver systems that can operate at low Eb/N0 levels.
Digital wireless telecommunication systems, such as, e.g., digital cellular and PCS telephone systems, also use forward error correction. For example, the Telecommunications Industry Association has promulgated the over-the-air interface standard TIA/EIA Interim Standard 95, and its derivatives, such as, e.g., IS-95B (hereinafter referred to collectively as IS-95), which define a digital wireless communications system that uses convolutional encoding to provide coding gain to increase the capacity of the system. A system and method for processing radio-frequency (RF) signals substantially in accordance with the use of the IS-95 standard is described in U.S. Pat. No. 5,103,459, which is assigned to the assignee of the present invention and fully incorporated herein by reference.
Transmission of digital data is also inherently prone to errors caused by intersymbol interference (ISI), a common impairment introduced by the communication channel. To obtain reasonable bandwidth efficiency, the channel bandwidth is usually selected to be comparable to the channel (modulation) symbol rate. As a result, the channel impulse response typically spans more than one channel symbol interval. Hence, in addition to the contribution of the desired symbol, the sampled received signal usually contains contributions from multiple channel data symbols adjacent to the desired symbol. The interference caused by the adjacent symbols to the desired data symbol is called ISI. Multipath in a communication channel also introduces ISI.
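The ISI mechanism described above may be sketched as a discrete-time channel whose impulse response spans several symbol intervals; the two-tap response used here is purely illustrative.

```python
def apply_channel(symbols, taps):
    """Each received sample is a weighted sum of the current and previous
    transmitted symbols; every term beyond taps[0]*symbols[n] is ISI."""
    return [sum(taps[k] * symbols[n - k]
                for k in range(len(taps)) if n - k >= 0)
            for n in range(len(symbols))]

# BPSK symbols through a channel whose response spans two symbol intervals:
print(apply_channel([1, -1, 1], [1.0, 0.5]))  # → [1.0, -0.5, 0.5]
```

The second and third samples each contain a 0.5-weighted contribution from the previous symbol in addition to the desired symbol, which is the interference to be removed by equalization or cancellation.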
If the aliased frequency spectrum of the received signal, sampled at the symbol interval, is constant, ISI in the sampled received signal is eliminated. Thus, one method of correcting ISI is to pass the received signal through a linear filter chosen such that the sampled signal spectrum becomes constant. Such a filter is conventionally called a linear equalizer. Methods for correcting ISI are known in the art as equalization techniques. Well-known equalizers include the linear equalizer, the decision feedback equalizer (DFE), and the maximum likelihood sequence estimation (MLSE) equalizer.
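Of the techniques listed above, the decision feedback equalizer admits a particularly compact sketch: the ISI implied by past decisions is subtracted before each new symbol is sliced. BPSK symbols and a known two-tap channel are illustrative assumptions.

```python
def dfe_equalize(received, taps):
    """Decision-feedback equalizer: subtract the ISI implied by past
    *decisions*, scale by the main tap, then slice to the nearest symbol."""
    decisions = []
    for n, r in enumerate(received):
        isi = sum(taps[k] * decisions[n - k]
                  for k in range(1, len(taps)) if n - k >= 0)
        z = (r - isi) / taps[0]
        decisions.append(1 if z >= 0 else -1)   # BPSK slicer
    return decisions

received = [1.0, -0.5, 0.5, 1.5, -0.5]     # [1,-1,1,1,-1] through taps [1.0, 0.5]
print(dfe_equalize(received, [1.0, 0.5]))  # → [1, -1, 1, 1, -1]
```

Because the DFE feeds back hard decisions rather than the true symbols, a wrong decision propagates ISI into subsequent samples; this error propagation is one reason the matched-filter bound is not attained in practice.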
It is well known that the optimal receiver front end, in the sense of maximizing the received signal-to-noise ratio, is a matched filter (MF) front end. If there is no ISI at the output of the matched filter, the receiver can achieve the optimum performance, called the MF bound, over channels with additive Gaussian noise. Unfortunately, a matched filter usually also introduces ISI, so an equalizer is usually needed to follow the MF front end. When an equalizer is needed, the receiver's performance is always inferior to the MF bound.
If the symbols transmitted before and after the current desired symbol are known, it is possible to attain the MF-bound performance by subtracting out the ISI caused by these symbols. This technique is called ISI cancellation. Unfortunately, these symbols are usually not known, and ISI cancellation can be implemented only by using estimates of these symbols. Thus, conventional ISI cancellation techniques are usually far from optimal, and even inferior to other equalization techniques.
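The cancellation idea may be sketched as follows: given estimates of the neighboring symbols, their contributions through the known channel taps are subtracted from each received sample. The symmetric three-tap response, with the desired symbol at the center tap, is an illustrative stand-in for a matched-filter output.

```python
def cancel_isi(received, est_symbols, taps, center):
    """Subtract the estimated contribution of every neighboring symbol,
    past and future, leaving taps[center] * (desired symbol) plus noise."""
    n_sym = len(est_symbols)
    cleaned = []
    for n in range(len(received)):
        isi = sum(taps[k] * est_symbols[n - k + center]
                  for k in range(len(taps))
                  if k != center and 0 <= n - k + center < n_sym)
        cleaned.append(received[n] - isi)
    return cleaned

# [1,-1,1,-1] through the two-sided response [0.3, 1.0, 0.3] (center tap = 1):
received = [0.7, -0.4, 0.4, -0.7]
print(cancel_isi(received, [1, -1, 1, -1], [0.3, 1.0, 0.3], center=1))
# with perfect symbol estimates the ISI is removed exactly: ≈ [1.0, -1.0, 1.0, -1.0]
```

With perfect estimates the sketch attains the MF bound, but in practice the estimates come from earlier decisions or decoder iterations, and estimation errors are what degrade conventional ISI cancellation below other equalization techniques.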
There is an ongoing drive in the communications industry to continually improve coding gains. It has been found that a combination of maximum a posteriori (MAP) algorithms and turbo decoding outperforms ISI cancellation equalization techniques. However, the combined MAP and turbo decoding approach is very complex, with the complexity of implementation increasing exponentially with the number of channel taps and with the size of the channel symbol constellation.
It would be advantageous to attain the performance of combined MAP and turbo decoding techniques in a more simply realized manner by optimizing ISI cancellation techniques; in particular, ISI cancellation can be optimized by combining it with turbo decoding. Thus, there is a need for a reduced-complexity method of improving communication channel coding gains that combines turbo decoding with ISI cancellation and can be simply realized.