Data communication systems comprise three components: a transmitter, a transmission channel, and a receiver. Transmitted data become altered due to noise corruption and channel distortion. To reduce the errors so caused, redundancy is intentionally introduced, and the receiver uses a decoder to make corrections. In modern data communication systems, the use of error correction codes plays a fundamental role in achieving transmission accuracy, as well as in increasing spectrum efficiency. Using error correction codes, the transmitter encodes the data by adding parity check information and sends the encoded data through the transmission channel to the receiver. The receiver then uses a decoder to decode the received data and to make corrections using the added parity check information.
Low Density Parity Check (LDPC) codes were first disclosed by Gallager in the early 1960s, R. G. Gallager: “Low Density Parity Check Codes”, Cambridge, Mass.: MIT Press, 1963. LDPC codes are linear codes which have been found to be capable of error correcting performance close to the Shannon limit, as disclosed in D. J. C. MacKay and R. M. Neal: “Near Shannon limit performance of low density parity check codes”, Electron. Lett., vol. 32, no. 18, pp. 1645-1646, 1996. Shortly after the development of Turbo codes, researchers noticed that existing graphical representations such as Bayesian networks and factor graphs provide unifying frameworks for LDPC decoding using a Sum Product (SP) process involving message passing over the edges of a factor graph, as disclosed in F. Kschischang, B. Frey, and H. Loeliger: “Factor graphs and the sum-product algorithm”, IEEE Trans. Inform. Theory, vol. 47, no. 2, pp. 498-519, February 2001. Unfortunately, hardware implementations of LDPC decoders based on this process are highly complex and costly.
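The SP message-passing process referenced above can be illustrated in software. The following is a minimal, hypothetical sketch of sum-product (belief propagation) decoding over the factor graph of a binary parity check matrix H, operating on log-likelihood ratios; the function name and matrix are illustrative only, and the sketch is not the implementation of any cited reference.

```python
import numpy as np

def sum_product_decode(H, llr, max_iter=20):
    """Sum-product decoding over the factor graph of H.

    H   : (m, n) binary parity check matrix
    llr : length-n channel log-likelihood ratios (positive favours bit 0)
    Returns a hard-decision estimate of the codeword.
    """
    m, n = H.shape
    # variable-to-check messages, initialised with the channel LLRs
    V = H * llr
    for _ in range(max_iter):
        # check-to-variable update (tanh rule), computed edge by edge
        T = np.tanh(np.clip(V, -30, 30) / 2.0)
        C = np.zeros_like(V, dtype=float)
        for i in range(m):
            idx = np.nonzero(H[i])[0]
            for j in idx:
                others = [k for k in idx if k != j]
                prod = np.prod(T[i, others])
                C[i, j] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
        # variable-to-check update and tentative hard decision
        total = llr + C.sum(axis=0)
        for j in range(n):
            for i in np.nonzero(H[:, j])[0]:
                V[i, j] = total[j] - C[i, j]
        hard = (total < 0).astype(int)
        if not np.any(H @ hard % 2):  # zero syndrome: valid codeword found
            break
    return hard
```

For example, on the factor graph of the (7,4) Hamming code, a single mildly corrupted bit of the all-zero codeword is corrected in one iteration.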
Reed-Solomon (RS) codes are non-binary linear block codes whose symbols are chosen from a Galois Field (GF). The good minimum distance of RS codes, together with their non-binary nature, results in good bit and burst error-correcting performance. RS codes are employed in a wide spectrum of applications including magnetic recording, media transmission, and satellite communication. Methods for RS decoding are generally classified into Hard Decision Decoding (HDD) and Soft Decision Decoding (SDD) methods. In many existing applications algebraic HDD is used for RS decoding. However, algebraic HDD methods are not able to use “soft” information provided by maximum a posteriori or turbo decoders. Iterative decoding based on the SP process has been used in LDPC decoding. When iterative decoding is applied to a code with a high-density parity check matrix, as is the case for RS codes, the iterative decoding is likely to become locked at local minimum points (pseudo-equilibrium points) that do not correspond to a valid codeword. Based on this observation, a method disclosed in J. Jiang and K. R. Narayanan: “Iterative soft-input soft-output decoding of Reed-Solomon codes”, IEEE Trans. Inform. Theory, vol. 52, no. 8, pp. 3746-3756, August 2006, adapts the parity check matrix during each SP process iteration step according to bit reliabilities, such that the columns of the adapted parity check matrix corresponding to the unreliable bits are sparse. The SP process is then applied to the adapted parity check matrix. It was shown that this adaptation technique prevents the SP process from becoming locked at pseudo-equilibrium points and improves the convergence of the decoding process. Unfortunately, existing hardware implementations of RS decoders based on this process are highly complex and costly.
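The adaptation step can be sketched as Gaussian elimination over GF(2) that reduces the columns at the least reliable bit positions to unit-weight columns. The following simplified, hypothetical Python sketch illustrates the idea only; the cited work contains further details not reproduced here, and the function name is an assumption.

```python
import numpy as np

def adapt_parity_check(H, reliabilities):
    """Reduce the columns of H at the least reliable bit positions to
    unit-weight (sparse) columns by Gaussian elimination over GF(2).

    H             : (m, n) binary parity check matrix (int entries)
    reliabilities : length-n vector; smaller value = less reliable bit
    """
    H = H.copy() % 2
    m, n = H.shape
    order = list(np.argsort(reliabilities))  # least reliable first
    pivot_rows = []
    for j in order:
        if len(pivot_rows) >= m:
            break  # at most m columns can be reduced
        # find a row not yet used as a pivot with a 1 in column j
        rows = [i for i in range(m) if i not in pivot_rows and H[i, j]]
        if not rows:
            continue  # column depends on earlier pivots; skip it
        r = rows[0]
        # clear column j in every other row (XOR = addition over GF(2))
        for i in range(m):
            if i != r and H[i, j]:
                H[i] ^= H[r]
        pivot_rows.append(r)
    return H
```

After adaptation, each of the m least reliable (linearly independent) bit positions is checked by exactly one row, which is what keeps the SP process from latching onto those positions.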
Stochastic computation was introduced in the 1960s as a method to design low precision digital circuits. It has been used, for example, in neural networks. The main feature of stochastic computation is that probabilities are represented as streams of digital bits which are manipulated using simple circuitry. Its simplicity has made it attractive for the implementation of error correcting decoders, in which complexity and routing congestion are major problems, as disclosed, for example, in W. Gross, V. Gaudet, and A. Milner: “Stochastic implementation of LDPC decoders”, in the 39th Asilomar Conf. on Signals, Systems, and Computers, Pacific Grove, Calif., November 2005.
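The core idea, that a probability is encoded as the mean of a pseudo-random bit stream and manipulated with simple gates, can be illustrated with a short software sketch (function names are hypothetical): an AND gate applied to two independent streams yields a stream whose mean is the product of the two probabilities.

```python
import random

def bernoulli_stream(p, length, rng):
    """Encode probability p as a pseudo-random bit stream with mean p."""
    return [1 if rng.random() < p else 0 for _ in range(length)]

def stream_product(pa, pb, length=100_000, seed=1):
    """Multiply two probabilities with one AND gate per bit:
    for independent streams, P(a AND b) = pa * pb."""
    rng = random.Random(seed)
    a = bernoulli_stream(pa, length, rng)
    b = bernoulli_stream(pb, length, rng)
    return sum(x & y for x, y in zip(a, b)) / length
```

The precision of the result grows only with the stream length, which is what lets stochastic hardware replace a multiplier with a single gate.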
A major difficulty observed in stochastic decoding is its sensitivity to the level of switching activity (bit transitions) required for proper decoding operation: when switching events become too rare, a group of nodes becomes locked into one state. To overcome this “latching” problem, C. Winstead, V. Gaudet, A. Rapley, and C. Schlegel: “Stochastic iterative decoders”, in Proc. of the IEEE Int. Symp. on Information Theory, September 2005, pp. 1116-1120, teach “packetized supernodes” which prevent correlation between messages. A supernode is a special node which tabulates the incoming stochastic messages in histograms, estimates their probabilities, and regenerates uncorrelated stochastic messages using random number generators. Unfortunately, the introduction of supernodes diminishes the advantages of the stochastic computation by necessitating complex hardware for implementing the supernodes. In addition to supernodes, C. Winstead: “Error control decoders and probabilistic computation”, in Tohoku Univ. 3rd SOIM-COE Conf., Sendai, Japan, October 2005, teaches scaling of channel LLRs to a maximum value to ensure the same level of switching activity for each block.
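The supernode taught in the cited reference is a hardware structure; its effect can be illustrated in software as follows (a hypothetical sketch, with an illustrative function name): estimate the probability carried by an incoming stochastic stream and emit a freshly randomised stream with the same mean, destroying the inter-message correlation that causes latching.

```python
import random

def supernode_regenerate(incoming_bits, out_length, rng):
    """Tabulate an incoming stochastic stream, estimate its probability,
    and regenerate an uncorrelated stream with the same mean."""
    p_hat = sum(incoming_bits) / len(incoming_bits)  # histogram estimate
    return [1 if rng.random() < p_hat else 0 for _ in range(out_length)]
```

The regenerated stream carries the same probability but is statistically independent of the input, at the cost of the counters and random number generators that make the hardware implementation complex.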
Unfortunately, these methods provide only limited performance when decoding state-of-the-art LDPC and RS codes on factor graphs.
It would be desirable to provide a method for iterative stochastic decoding of state-of-the-art LDPC and RS codes on factor graphs, which overcomes at least some of the above-mentioned limitations.