Described below is a method for encoding a data message K′ for transmission from a sending station to a receiving station as well as a respective method for decoding, a respective sending station, a respective receiving station and respective software.
Construction of good low-density parity-check codes (LDPC codes) has recently become one of the most active research topics in coding theory. In spite of substantial progress in the asymptotic analysis of LDPC codes, as described in T. Richardson et al., “Design of capacity-approaching irregular low-density parity-check codes,” IEEE Transactions On Information Theory, vol. 47, no. 2, pp. 619-637, February 2001, constructing practical codes of short and moderate length still remains an open problem. The main reason for this is that density evolution analysis calls for irregular LDPC codes, while most of the algebraic code design techniques developed up to now can produce only regular constructions, which inherently lack the capacity-approaching behavior. Furthermore, the performance of practical-length LDPC codes depends not only on the asymptotic iterative decoding threshold, but also on the code minimum distance.
For high-rate codes, minimum distance appears to be the dominant factor. This allows one to solve the code construction problem using traditional techniques of coding theory, like finite geometries, difference sets, etc. (see for example Y. Kou et al., “Low-density parity-check codes on finite geometries: A rediscovery and new results,” IEEE Transactions on Information Theory, vol. 47, no. 7, November 2001; B. Ammar et al., “Construction of low-density parity-check codes based on balanced incomplete block designs,” IEEE Transactions on Information Theory, vol. 50, no. 6, June 2004; and S. J. Johnson et al., “Resolvable 2-designs for regular low-density parity-check codes,” IEEE Transactions on Communications, vol. 51, no. 9, September 2003). But for lower-rate codes (e.g. rate ½) one has to find a trade-off between the minimum distance and the iterative decoding threshold. This is typically achieved by performing some kind of conditioned random search, as described in X.-Y. Hu et al., “Regular and irregular progressive edge-growth Tanner graphs,” IEEE Transactions on Information Theory, vol. 51, no. 1, January 2005; T. Tian et al., “Selective avoidance of cycles in irregular LDPC code construction,” IEEE Transactions On Communications, vol. 52, no. 8, August 2004; and Hua Xiao et al., “Improved progressive-edge-growth (PEG) construction of irregular LDPC codes,” IEEE Communications Letters, vol. 8, no. 12, pp. 715-717, December 2004. Codes obtained with these methods exhibit very good performance, but are very difficult to implement due to the absence of any structure. One way to solve this problem is to modify a structured regular parity check matrix so that it becomes irregular, as described in Dale E. Hocevar, “LDPC code construction with flexible hardware implementation,” in Proceedings of IEEE International Conference on Communications, May 2003, pp. 2708-2711 and Jingyu Kang et al., “Flexible construction of irregular partitioned permutation LDPC codes with low error floors,” IEEE Communications Letters, vol. 9, no. 6, June 2005. However, there is no formalized way to perform such a modification, and many trials have to be performed before a good code is found.
LDPC codes represent a sub-class of linear codes. Any codeword c of a linear code satisfies the equation Hc^T = 0, where H is a parity check matrix of the code. On the other hand, for a given data message K the corresponding codeword can be obtained as c = KG, where G is a generator matrix of the code. This implies that the parity check matrix H and the generator matrix G must satisfy the equation HG^T = 0. For a given parity check matrix H, it is possible to construct many different generator matrices G. It is always possible (possibly after applying a column permutation to the check matrix H) to construct a generator matrix in the form G = [A I], where I is the identity matrix and A is some other matrix. In this case the codeword corresponding to a data message K takes the form c = KG = [KA K], i.e. the data message K appears as a sub-vector of the codeword. This is known as systematic encoding. The advantage of systematic encoding is that as soon as all channel errors are removed from the codeword, the data message can be immediately extracted from it, without any post-processing. In some cases it is possible to construct other methods for systematic encoding of data messages that are more efficient than multiplication by the generator matrix in systematic form. However, the encoding method does not affect the error correction capability of the code.
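The relations above can be illustrated with a small sketch. The matrices below are a made-up toy example (a (7,4) code in systematic form), not a code from this description; they merely show that HG^T = 0 holds and that the data message K reappears verbatim inside the codeword:

```python
import numpy as np

# Hypothetical toy example: a (7,4) code in systematic form.
# G = [A I] places the 4-bit data message K verbatim at the end
# of every codeword (systematic encoding).
A = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]], dtype=np.uint8)
G = np.hstack([A, np.eye(4, dtype=np.uint8)])      # G = [A I], 4 x 7

# A matching parity check matrix H = [I A^T] satisfies H G^T = 0 (mod 2).
H = np.hstack([np.eye(3, dtype=np.uint8), A.T])    # 3 x 7
assert not np.any(H @ G.T % 2)                     # H G^T = 0 over GF(2)

K = np.array([1, 0, 1, 1], dtype=np.uint8)         # data message
c = K @ G % 2                                      # codeword c = KG = [KA K]

assert not np.any(H @ c % 2)                       # H c^T = 0: valid codeword
assert np.array_equal(c[-4:], K)                   # K extracted with no post-processing
```

Note how, once the codeword is error-free, the message is read directly from its last four positions, which is exactly the benefit of systematic encoding described above.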
It is also true that if H is a parity check matrix of some linear code, then the product SH is also a parity check matrix for the same code, where S is any non-singular matrix. In particular, if S is a permutation matrix and H is a low-density matrix, running the belief propagation LDPC decoding algorithm, as described in W. E. Ryan, “An introduction to LDPC codes,” in CRC Handbook for Coding and Signal Processing for Recording Systems, B. Vasic, Ed. CRC Press, 2004, over the matrix SH gives exactly the same results as for the original matrix H.
Many LDPC code constructions are based on expansion of a template matrix P by substituting given entries p_ij with p×p permutation matrices, where p is the expansion factor. Using the construction according to FIGS. 2 to 8, codes with different length but the same rate can be generated starting from a single template matrix. More specifically, their parity check matrix is given by two sub-matrices: the first one, H_z, is a double diagonal matrix implementing the so-called zigzag pattern, and the second one, H_i, is constructed based on the expansion of a template matrix, i.e. by substituting its entries with permutation and zero matrices. By changing the expansion factor, codes of different lengths are obtained. Obviously, this requires employing different permutation matrices. Despite the simplicity of this operation, changing the expansion factor requires a re-routing of the edges in the Tanner graph corresponding to the parity check matrix. Since the Tanner graph structure is implemented for the so-called “message passing” or “belief propagation” decoding at the receiver, a change in its structure can entail significant hardware complexity when considering the whole code family.
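The expansion step can be sketched as follows. The template below and the convention of marking zero blocks with −1 are assumptions for illustration (circulant shifts of the identity are one common choice of permutation matrix), not the specific construction of FIGS. 2 to 8:

```python
import numpy as np

def expand(template, p):
    """Expand a template matrix: each entry s >= 0 becomes the p x p
    identity cyclically shifted by s columns (a permutation matrix);
    the placeholder -1 becomes the p x p zero matrix."""
    I = np.eye(p, dtype=np.uint8)
    blocks = [[np.roll(I, s, axis=1) if s >= 0
               else np.zeros((p, p), dtype=np.uint8)
               for s in row]
              for row in template]
    return np.block(blocks)

template = [[0, 1, -1],        # made-up 2 x 3 template, -1 = zero block
            [2, -1, 0]]

H4 = expand(template, 4)       # expansion factor p = 4 -> 8 x 12 matrix
H8 = expand(template, 8)       # p = 8 -> 16 x 24: longer code, same rate
assert H4.shape == (8, 12) and H8.shape == (16, 24)
assert H4.sum() == 4 * 4       # each of the 4 non-zero blocks is a permutation
```

The sketch also makes the drawback visible: H4 and H8 share the same template, but the positions of the ones (the Tanner graph edges) differ, which is the re-routing cost noted above.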
A possibility to obtain longer codes from shorter codes is to use concatenated coding, as described in the B. Ammar et al. article cited above. However, this usually also changes the code rate. Furthermore, LDPC codes are known not to perform well when employed in concatenated code constructions. The reason is that it is extremely difficult to obtain a good node degree distribution in the Tanner graph of the concatenated code.