The purpose of any communication system is to provide a framework that allows the transmission of information from an information source to a destination via a communication channel. To improve performance, coverage, and efficiency, modern communication systems employ digital signaling techniques. In general, information is randomly distorted during transmission by a variety of factors, among them attenuation, nonlinearities, bandwidth limitations, multipath propagation, and noise. Depending on the type and degree of such distortion, the transmitted information may be received incorrectly; in a digital system, this translates to an abundance of bit errors. If these bit errors go unchecked, the system becomes practically useless. The goal of the designer of a digital communication system is thus to provide a cost-effective means of improving the reliability and quality of such transmission.
One method of improving the error performance between two users at a given data rate is to increase the power of the transmitted signal, so that the receiver can more easily determine the content of the communicated message. However, there are substantial costs that motivate other methods of improving communications. One cost, in a multiuser system, is the resulting interference to other users: while the intended target receives the message more easily, the amplified message may interfere with the communications of other users. Another cost is that, for a mobile user, expending too much power on signal transmission shortens battery life. In addition, each transmitter has a limit on its average power.
With digital signaling, an alternative way to mitigate the effects of channel distortion is error-control coding. Error-control coding for data integrity may be exercised by means of forward error correction (FEC). Essentially, by introducing structured redundancy into the data stream, the receiver can reduce its susceptibility to channel distortion. Hence, for the same performance criteria, error-control coding can provide several decibels (dB) of signal-to-noise ratio (SNR) gain over uncoded systems. It has been said that each decibel is worth one million dollars. In some cases, the coding gain over the uncoded case can exceed 8 dB.
The approach to error-correction coding taken by modern digital communication systems started in the late 1940s with the groundbreaking work of Shannon, Hamming, and Golay. In his paper, Shannon set forth the theoretical basis for coding, which has come to be known as information theory. By mathematically defining the entropy of an information source and the capacity of a communication channel, he showed that reliable communication over a noisy channel is possible provided that the source's entropy is lower than the channel's capacity. In other words, Shannon proved that every noisy channel has a maximum rate at which information may be transferred through it, and that it is possible to design error-correcting codes that approach this capacity, or Shannon limit, provided that the codes may be unbounded in length. This came as a surprise to the communications community, which at the time thought it impossible to achieve both arbitrarily small error probability and a nonzero data transmission rate. Shannon did not, however, explicitly state how to design such codes. For the last six decades, the construction of capacity-approaching coding schemes that are easy to encode and decode has been the supreme goal of coding research. Various types of coding schemes have been used over the years for error correction. In the last decade, a breakthrough was made in this field with the discovery of practical codes and decoding algorithms that closely approach the channel capacity limit. There are two large classes of such codes: turbo codes and low-density parity-check (LDPC) codes.
Turbo codes are obtained by parallel or serial concatenation of two or more component codes, with interleavers between the encoders. The component codes are mainly simple convolutional codes, so turbo codes are easy to construct and encode. As mentioned, an interleaver is required to permute the input information sequence. It has been shown that the larger the interleaver, the better the performance of the turbo code; on the other hand, a large interleaver causes a large decoding delay. In decoding a turbo code, each component code is decoded with a trellis-based algorithm, so for practical implementations only codes with simple trellises can be used as component codes. However, codes with simple trellises normally have small minimum distances, causing an error floor at medium-to-high SNR. In turbo decoding, at each decoding iteration the reliability values and estimates are obtained only for the information bits, so no error detection can be performed to stop the decoding iteration process. The only way to stop the decoding is to test for decoding convergence, which is usually complex. The lack of error detection results in poor block error rate and slow termination of iterative decoding.
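The parallel-concatenation structure can be sketched in a few lines. The component encoder and interleaver below are toy stand-ins chosen for illustration, not any standardized turbo code (real component codes are recursive systematic convolutional codes):

```python
import random

def conv_parity(bits, g=0b111):
    """Parity stream of a toy constraint-length-3 convolutional
    encoder (illustrative; real turbo component codes are
    recursive systematic)."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & 0b111         # shift in the new bit
        out.append(bin(state & g).count("1") & 1)  # XOR of tapped bits
    return out

def turbo_encode(info, seed=0):
    """Parallel concatenation: systematic bits, parity on the original
    order, and parity on an interleaved copy (overall rate ~1/3)."""
    pi = list(range(len(info)))
    random.Random(seed).shuffle(pi)          # interleaver permutation
    interleaved = [info[i] for i in pi]
    return info, conv_parity(info), conv_parity(interleaved)
```

Because the second parity stream is computed on a permuted copy of the data, a larger interleaver spreads error bursts across the two streams, which is why performance improves with interleaver size at the cost of latency.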
LDPC codes are block codes discovered by Gallager in the early 1960s. After their discovery, they were ignored for a long time and only recently rediscovered. It has been proved that LDPC codes are good, in the sense that sequences of codes exist which, when optimally decoded, achieve arbitrarily small error probability at nonzero communication rates up to some maximum rate that may be less than the capacity of the given channel. Numerous simulation results show that long LDPC codes with iterative decoding achieve outstanding performance. Until recently, good LDPC codes were mostly computer generated, and encoding such codes is usually very complex due to the lack of understanding of their structure. On the other hand, iterative decoding of LDPC codes is not trellis based, so LDPC codes are not required to have simple trellises; thus their minimum distances are usually better than those of turbo codes. For this reason, LDPC codes usually outperform turbo codes in the moderate-to-high-SNR region and exhibit an error floor at lower error rates. Another advantage of LDPC codes over turbo codes is that their decoding algorithm provides reliability values and estimates for every code bit at the end of each iteration, enabling error detection: the decoding iteration process is stopped as soon as the estimated sequence is detected to be a codeword. Therefore, LDPC codes normally provide better block error performance and faster termination of iterative decoding.
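The codeword-detection stopping rule mentioned above is simply a syndrome check: decoding halts as soon as the current bit estimates satisfy every parity check. A minimal sketch follows, using a tiny (7,4) Hamming parity-check matrix as a stand-in (practical LDPC matrices are far larger and sparse):

```python
# Tiny (7,4) Hamming parity-check matrix, a stand-in for a
# sparse LDPC matrix (illustration only).
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(H, c):
    """H * c (mod 2); an all-zero syndrome means c is a codeword."""
    return [sum(h & b for h, b in zip(row, c)) % 2 for row in H]

def is_codeword(H, c):
    """Stopping test: terminate iterative decoding once this is True."""
    return not any(syndrome(H, c))
```

Because the test is a per-iteration by-product of the decoder's own check-node computations, it costs essentially nothing, in contrast to the convergence tests needed for turbo decoding.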
LDPC codes are becoming the standard for error control in a wide range of applications in communication and digital storage systems where high reliability is required. Medium-rate LDPC codes are used in standards such as DVB-S2, WiMAX (IEEE 802.16e), and wireless LAN (IEEE 802.11n). Furthermore, high-rate LDPC codes have been selected as the channel coding scheme for mmWave WPAN (IEEE 802.15.3c).
While encoding efficiency and high data rates are important, for an encoding and/or decoding system to be practical for use in a wide range of devices, e.g., consumer devices, it is important that the encoders and/or decoders be capable of being implemented at reasonable cost. Accordingly, the ability to efficiently implement encoding/decoding schemes used for error correction and/or detection purposes, e.g., in terms of hardware costs, can be important.
LDPC codes can be decoded with various decoding methods, ranging from low to high complexity and from reasonably good to very good performance. These decoding methods include hard-decision, soft-decision, and hybrid decoding schemes. From an implementation point of view, hard-decision decoding is the simplest in complexity; however, its simplicity results in a relatively poor performance that can be as far away as a few decibels from that of soft-decision decoding. Soft-decision decoding provides the best performance but requires the highest computational complexity. Hybrid decoding is in between the two extremes and provides a good trade-off between performance and complexity.
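The low-complexity end of this range can be illustrated with a Gallager-style hard-decision bit-flipping decoder. The sketch below is illustrative only, again using a small Hamming parity-check matrix as a stand-in for a sparse LDPC matrix:

```python
H = [  # tiny (7,4) Hamming parity-check matrix (illustration only)
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def bit_flip_decode(H, r, max_iter=20):
    """Hard-decision bit flipping: repeatedly flip the bit that
    participates in the largest number of unsatisfied parity checks."""
    c, m, n = list(r), len(H), len(r)
    for _ in range(max_iter):
        s = [sum(H[i][j] & c[j] for j in range(n)) % 2 for i in range(m)]
        if not any(s):
            return c, True                   # valid codeword: stop early
        # count the failed checks touching each bit
        fails = [sum(s[i] for i in range(m) if H[i][j]) for j in range(n)]
        c[fails.index(max(fails))] ^= 1      # flip the least reliable bit
    return c, False
```

Only hard bits and counters are used, which is why such decoders are cheap to implement but give up a few decibels relative to soft-decision decoding, which works with the received symbol reliabilities instead.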
Among hybrid decoding algorithms, the soft reliability-based iterative majority-logic decoding (SRBI-MLGD) algorithm offers one of the best trade-offs between performance and complexity. The algorithm is designed around the orthogonal check-sums concept used in the one-step majority-logic decoding (OSMLGD) algorithm. The decoding function of SRBI-MLGD incorporates soft reliability measures of the received symbols, which are improved through the decoding iterations. Simulation results show that SRBI-MLGD performs just as well as many variants of weighted bit-flipping (WBF) algorithms and the differential binary message-passing decoding (DBMPD) algorithm, with much lower decoding complexity; furthermore, it has a faster rate of decoding convergence. An important feature of SRBI-MLGD is that it is a binary message-passing algorithm requiring only logical operations and integer additions. This significantly simplifies the decoder implementation, which can be achieved with simple combinational logic circuits. Another feature of SRBI-MLGD is that it allows parallel decoding of all received symbols in each decoding iteration, which is important for achieving very high decoding speed. However, the performance of SRBI-MLGD is still far from that of soft-decision decoding algorithms.
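This style of decoder can be sketched roughly as follows. The initialization, quantization, and update rule below are simplified assumptions for illustration, not the published SRBI-MLGD algorithm verbatim, and the Hamming matrix again stands in for a sparse LDPC matrix. Each parity check casts an integer vote on every bit it touches, and a per-bit integer reliability accumulates the votes:

```python
H = [  # tiny (7,4) Hamming parity-check matrix (illustration only)
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def srbi_mlgd(H, y_soft, max_iter=50, q=4):
    """Rough sketch in the style of SRBI-MLGD: integer reliabilities
    updated by votes from the check sums, using only integer additions
    and logical operations (simplified, not the published algorithm)."""
    m, n = len(H), len(y_soft)
    # quantize soft channel values to q-bit signed integers (assumption)
    R = [max(-2 ** (q - 1), min(2 ** (q - 1) - 1, round(v))) for v in y_soft]
    z = [0 if r >= 0 else 1 for r in R]      # hard decisions (BPSK: + -> 0)
    for _ in range(max_iter):
        s = [sum(H[i][j] & z[j] for j in range(n)) % 2 for i in range(m)]
        if not any(s):
            return z, True                   # codeword detected: stop early
        for j in range(n):
            # votes from checks on bit j: +1 satisfied, -1 unsatisfied
            e = sum(1 - 2 * s[i] for i in range(m) if H[i][j])
            # a satisfied check reinforces the current decision on bit j
            R[j] += e if z[j] == 0 else -e
        z = [0 if r >= 0 else 1 for r in R]  # re-decide from reliabilities
    return z, False
```

Because every bit's reliability can be updated simultaneously from the current check sums, all received symbols are decoded in parallel within each iteration, and the all-zero syndrome test provides the early-termination error detection noted above.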
Accordingly, a need exists for a method of decoding LDPC codes with reduced complexity while outperforming soft-decision decoding algorithms in terms of bit-error rate (BER) or frame-error rate (FER) performance. These needs and others are met with the present invention, which overcomes the drawbacks and deficiencies of previously developed LDPC decoding algorithms.