1. Field of the Invention
The present invention relates to communication networks and, more particularly, to algebraic low-density parity check code design for variable block sizes and code rates.
2. Description of the Related Art
Data communication networks may include various computers, servers, nodes, routers, switches, bridges, hubs, proxies, and other network devices coupled to and configured to pass data to one another. These devices will be referred to herein as “network elements.” Data is communicated through the data communication network by passing protocol data units, such as Internet Protocol packets, Ethernet Frames, data cells, segments, or other logical associations of bits/bytes of data, between the network elements by utilizing one or more communication links between the devices. A particular protocol data unit may be handled by multiple network elements and cross multiple communication links as it travels between its source and its destination over the network.
In wireless networks, especially wireless local area networks, data transmission may occasionally, or frequently, encounter bit errors. In some instances, the transmission error rate may be relatively high. To enable accurate transmission of data under these conditions, it is common to use forward error correction so that the data may be extracted from the signal even if the signal is corrupted.
Forward error correction is a technique of adding redundancy (parity check bits) to transmitted information so that received information can be recovered even in the presence of noise. Codes that implement forward error correction have varying degrees of complexity and effectiveness.
Depending on the transmission characteristics of the wireless network and the complexity of the code, fewer or more parity check bits may need to be inserted into the data stream. The fraction of data bits out of the total number of transmitted bits will be referred to herein as the “code rate.” For example, if 1000 parity bits are injected for every 1000 bits of data, the code rate is 1000/(1000+1000)=½. Similarly, if 500 parity bits are injected for every 1000 bits of data, the code rate is 1000/(1000+500)=⅔. Thus, an increased code rate results in the transmission of fewer parity check bits and hence reduces the overhead associated with transmission on the wireless network. On a transmission medium such as a wireless transmission medium, reducing the transmission overhead directly increases the usable bandwidth available on the network. Thus, optimization of the parity check code to enable higher code rates is one way to increase throughput on a wireless network.
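The code-rate arithmetic above can be sketched as follows. This is an illustrative helper, not part of the specification; the function name is chosen for clarity.

```python
def code_rate(data_bits: int, parity_bits: int) -> float:
    """Code rate = data bits divided by total transmitted bits."""
    return data_bits / (data_bits + parity_bits)

# The two examples from the text:
print(code_rate(1000, 1000))  # 1000/2000 = 0.5, i.e. rate 1/2
print(code_rate(1000, 500))   # 1000/1500 ~ 0.667, i.e. rate 2/3
```

As the calculation shows, halving the number of injected parity bits raises the rate from ½ to ⅔, reducing transmission overhead at the cost of weaker error protection.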
Injection of parity bits into the data stream and the use of these parity bits is controlled by an error correction code, the complexity of which depends greatly on the selection of a parity check matrix “H.” As the parity check matrix increases in complexity, the processing required on the end systems increases. Low density parity check matrices contain a large number of zeros, and hence may be expected to be relatively less complicated to implement.
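The role of the parity check matrix H can be illustrated with a toy example (the matrix and codeword below are hypothetical, not drawn from the specification). A received codeword c is consistent with the code when every parity check is satisfied, i.e. when the syndrome H·c equals zero modulo 2; because a low-density H has few ones per row, each check involves only a few bits.

```python
# A toy 3x6 binary parity check matrix; each row contains only a few 1s
# (low density), so each parity check touches only a few codeword bits.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def syndrome(H, codeword):
    """Return H*c mod 2; an all-zero syndrome means no detected errors."""
    return [sum(h * c for h, c in zip(row, codeword)) % 2 for row in H]

valid = [1, 0, 1, 1, 1, 0]          # satisfies all three parity checks
print(syndrome(H, valid))           # [0, 0, 0] -> no error detected
print(syndrome(H, [1, 1, 1, 1, 1, 0]))  # nonzero -> bit error detected
```

Since the per-check work scales with the number of ones in a row of H, a sparse (low-density) matrix keeps the decoding hardware comparatively simple, which is the motivation stated above.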
Thus, the two goals of error correction code generation may generally be considered to be achieving a higher code rate while maintaining or reducing the complexity of implementing the code. In pursuit of these dual goals, several parity check codes have been developed, each of which has particular limitations. For example, π-rotation codes have been developed for code rates of up to ½. One advantageous aspect of the π-rotation codes is their relative simplicity to implement in hardware due to the algebraic nature of the technique. Unfortunately, π-rotation codes have a code rate of ≤½, and thus are not suitable for many applications. Other techniques, such as Mutual Orthogonal Latin Rectangles (MOLR) and array codes, can be used for rates above ½, but do not perform well for rates below ½. Finite geometry codes exist for both high and low rates, but have a structure that is more complex to implement in hardware. Accordingly, it would be advantageous to be able to generate higher rate codes without excessively increasing the complexity of the codes.
Additionally, different types of transmissions on the wireless network require different transmission block sizes. For example, data transmissions may use block sizes of 600-1500 bytes, whereas voice transmissions may use a block size of 100 bytes. Conventionally, the different block sizes have been handled by different sized parity check matrices, which requires each network element to create and store a separate parity check matrix for each expected block size.