When transmitting packets across a communication network, packets may be lost for a variety of reasons, for example due to noise on wireless links, queue overflow, cache misses, etc. Re-transmitting lost packets increases the delay in communication. To avoid this, additional redundant coded packets can be sent, from which a receiver can reconstruct lost packets; this is typically referred to as forward error correction (FEC).
The standard FEC approach is to partition a stream of information packets into disjoint blocks of NR packets each, where N is the block size and R is the coding rate. Then, N(1−R) coded packets for a block are transmitted immediately after all information packets in the block have been transmitted, and each coded packet can help the receiver reconstruct any of the information packets within that block (but not packets in any other block).
For example, consider a stream of information packets indexed k=0, 1, 2, . . . . This is partitioned into blocks of size NR, with block K containing information packets KNR, KNR+1, . . . , (K+1)NR−1. Observe that each information packet is assigned to a single block. For block K, coded packets may be generated in many ways. One example is by random linear coding, as described below.
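To make the block-partitioning arithmetic concrete, the mapping from a packet index k to its block can be sketched as follows. This is a minimal illustration only; the function name and the representation of the rate R as a fraction are our own choices, not part of any standard.

```python
def block_of(k: int, n: int, r_num: int, r_den: int) -> int:
    """Return the index K of the block containing information packet k,
    for block size n and coding rate r = r_num/r_den, so that each block
    holds n*r information packets."""
    nr = (n * r_num) // r_den  # information packets per block (assumed integral)
    return k // nr

# With N = 6 and R = 2/3 (as in FIG. 1), NR = 4 information packets per block:
# packets 0..3 fall in block 0, packets 4..7 in block 1, and so on.
```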
Importantly, a coded packet (i) contains information about every information packet in block K, and (ii) contains no information about packets in other blocks. This is illustrated in FIG. 1, where a conventional block code consisting of NR=4 information packets u1, u2, u3 and u4 is followed by two redundant coded packets e1 and e2. Thus, the block illustrated in FIG. 1 has a block size of N=6, and a coding rate of R=2/3.
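The random linear coding mentioned above can be sketched, in its simplest binary form, as follows. This is a minimal sketch over GF(2), where "random linear combination" reduces to a bytewise XOR of a randomly chosen subset of the block's information packets; field sizes larger than 2 and the particular helper names here are implementation choices, not prescribed by the text.

```python
import random

def rlc_encode(block: list[bytes], rng: random.Random) -> tuple[list[int], bytes]:
    """Generate one random linear coded packet over GF(2): each information
    packet in the block is included with probability 1/2 (forcing at least
    one inclusion), and included packets are combined by bytewise XOR.
    Returns the coefficient vector and the coded payload."""
    coeffs = [rng.randint(0, 1) for _ in block]
    if not any(coeffs):
        coeffs[0] = 1  # avoid the all-zero (useless) combination
    payload = bytes(len(block[0]))
    for c, pkt in zip(coeffs, block):
        if c:
            payload = bytes(a ^ b for a, b in zip(payload, pkt))
    return coeffs, payload
```

In the special case where the coefficient vector is all ones (a simple parity packet, like e1 in FIG. 1), a single lost information packet can be recovered by XORing the coded payload with the NR−1 information packets that did arrive.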
The rationale for transmitting coded packets after all information packets in a block have been sent is twofold. Firstly, causality requires that a coded packet can only protect information packets transmitted prior to the coded packet. Secondly, by transmitting coded packets after all information packets in a block, each coded packet can help protect all of the information packets in the block. This ensures that the redundant packets offer the required protection against expected packet loss and make maximum use of the available network throughput capacity. Indeed, this conventional block coding approach is asymptotically throughput optimal; that is, it maximises use of available network capacity as the block size is made sufficiently large.
However, because coded packets are placed at the end of a block, recovery of lost packets cannot take place until all NR information packets have been transmitted and received, and so error correction comes at the cost of a delay which is roughly proportional to the block size NR. Typically information packets need to be delivered in-order to an application at the receiver. Hence, when a packet is lost, all subsequent information packets must be buffered at the receiver until the missing information packet can be reconstructed, and so these packets all suffer increased delay proportional to the block size NR.
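The dependence of in-order delivery delay on the block size can be sketched as follows, under the simplifying assumptions (ours, for illustration) that packets occupy unit time slots, that a single packet in the block is lost, and that the first coded packet, arriving in slot NR, suffices to reconstruct it.

```python
def inorder_delays(nr: int, lost: int) -> list[int]:
    """In-order delivery delay (in packet slots) for each of the nr
    information packets in a block, when packet `lost` is erased and can
    only be reconstructed once the first coded packet arrives at slot nr.
    Delay is measured as delivery slot minus transmission slot."""
    delays = []
    for k in range(nr):
        if k < lost:
            delays.append(0)       # arrived before the loss: delivered on arrival
        else:
            delays.append(nr - k)  # buffered until the coded packet at slot nr
    return delays

# Losing the first packet of a block with NR = 4 delays every packet in it:
# inorder_delays(4, 0) -> [4, 3, 2, 1]
```

As the sketch suggests, a loss early in the block delays almost all NR packets, so the average in-order delay grows roughly in proportion to the block size NR.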
Maximising use of available throughput capacity has traditionally been a primary design aim in communication networks, even at the cost of high delay, since throughput capacity has been the scarcest network resource. However, in modern networks excess network capacity is commonly available due to the prevalent practice of over-provisioning for network quality of service management. That is, network capacity is often no longer the scarcest network resource. Instead, delay is the primary performance bottleneck, and achieving low delay is becoming a primary design driver even if it comes at the cost of less efficient use of available network throughput capacity.
In view of the above, it is difficult to ensure low delay in-order delivery of packets across a lossy communication link. The conventional approach is to use block codes as described above, possibly with the addition of retransmission of lost packets to recover from decoding failure. Convolutional codes (as widely used at the physical layer on wireless links) are a special case of this type of block code plus retransmit scheme. Because the in-order delivery delay with such codes is roughly proportional to the block size NR, most of the work to date on reducing delay has focussed on working with smaller block sizes while retaining the same code construction of locating coded packets at the end of the transmission block; see, for example, Subramanian & Leith, “On a Class of Optimal Rateless Codes”, Proc. Allerton Conference 2008, and references therein. In the special case of bursty packet losses with a known upper limit on the number of packet losses, Martinian has proposed a low delay code construction, but it is not systematic (information packets are never transmitted uncoded) and is confined to a specific loss channel; see Martinian & Sundberg, “Burst Erasure Correction Codes with Low Decoding Delay”, IEEE Trans. Information Theory, 2004.
Accordingly, there is a need to address the above-described problems.