1. Field of the Invention
The present invention relates generally to the electrical arts and methods of the electrical arts. In particular, the disclosure relates to packet recovery and to a packet recovery method for real-time (live) multi-media communication over packet-switched networks such as the Internet.
2. Problems Identified by the Inventors
Packet loss in packet-switched networks, like the Internet, results from network congestion, switch contention and the consequent buffer overflow, and from noise and interference on the links that comprise the network. Moreover, packets are not necessarily received in the order in which they were transmitted. The probability of packet loss depends on how much of the network's capacity is actually in use. In addition to packet loss, these network characteristics result in unpredictable latency and jitter, all of which are undesirable for real-time multi-media communication.
Fiber optic networks that span the continental United States, and that interconnect continents via transoceanic cables, have very high network capacity and very low noise and interference over their individual links. In such long-haul networks, most packet loss results from electronic switching and buffers, rather than from optical transmission. Retransmission requests and retransmissions can cause buffer overflow and can exacerbate the problem of packet loss. For real-time multi-media communication, packet loss results in poor video/audio quality.
Wireless networks are characterized by low data rates and high error rates. In wireless networks, packet loss results from network congestion and interference due to physical objects and competing transmitters. Bursts of corrupt or lost packets are quite common in wireless networks. For multi-media communication, packet loss results in poor video/audio quality and also increased power consumption for the wireless devices.
Digital video involves encoding moving pictures into a digital signal, transmitting the digital signal, and decoding the digital signal into a viewable format on a television screen, computer monitor or cell phone. To use the network bandwidth efficiently, it is necessary to compress video frames because, otherwise, a very large number of digital bits would have to be transmitted.
The Moving Picture Experts Group (MPEG) has developed the MPEG-2 compression standard for digital video, which addresses the format of a compressed bit stream and the combining of video, audio and data into single or multiple streams for transmission or storage. MPEG-2 uses both spatial redundancy (redundancy within a single video frame) and temporal redundancy (redundancy between consecutive frames). MPEG-4 is a variant of MPEG intended for low-bandwidth video telephony, such as over wireless networks.
The Joint Photographic Experts Group (JPEG) has developed the JPEG 2000 compression standard, which can be used for both still and moving pictures. JPEG 2000 (J2K) is the leading digital film standard currently supported by the Digital Cinema Initiative (a consortium of major studios and vendors) for the storage, distribution and exhibition of motion pictures. JPEG 2000 does not employ temporal or inter-frame compression; instead, each frame is encoded as an independent entity, exploiting only spatial redundancy, using either a lossy or lossless variant of JPEG 2000. The lossless variant runs slightly slower and achieves lower compression ratios.
For efficiency, both MPEG and JPEG use compression of video frames. The most efficient kind of compression, variable bit rate compression, can result in large bursts of data and thereby contribute further to network congestion, switch contention and buffer overflow and, consequently, to unpredictable packet loss, latency and jitter.
With both MPEG and JPEG compression, the effect of unreliable network communication is exacerbated, because lost or erroneous packets make it difficult (or impossible) to decompress a transmitted video image and reconstruct it at the receiver. Thus, for video communication, there is a need to mitigate the unreliability of the communication network and to improve the quality of the delivered image.
In addition to low packet loss rates, real-time multi-media communication, particularly video and audio, requires low latency (typically, the response time of a human) and low jitter so that the quality of service is acceptable to the end user.
Feedback-based error correction, such as automatic retransmission request (ARQ), which is based on negative acknowledgements (NACKs) and retransmissions, introduces significant latency and jitter into packet transmissions. These characteristics make ARQ unacceptable for real-time multi-media communication.
For long-haul networks, forward error correction (FEC) can be used to recover missing packets while significantly reducing the latency and jitter, because the receiver does not need to transmit retransmission requests to the transmitter and to receive retransmissions from the transmitter. With an appropriate FEC hardware implementation, the FEC encoding and decoding times are small compared to the communication time.
In coding theory, an error is a corrupted symbol (or bit) with an unknown value in an unknown location, whereas an erasure is a corrupted symbol (or bit) with an unknown value in a known location. A forward error correction (FEC) code has R parity (checksum) symbols (or bits) added to K data symbols (or bits) to form an N=K+R codeword. The R parity symbols are added in such a way to allow a number of errors and erasures to be corrected. FEC codes are characterized by their algorithms for encoding and decoding the data symbols. The redundancy (overhead) of a FEC code is the ratio of parity symbols to data symbols, i.e., R/K. Increasing the redundancy increases the ability to correct errors and erasures, and the patterns of errors and erasures that can be corrected. The cost of increased redundancy is increased latency, due to the increased computation and communication cost. By replacing an error with an erasure, the error correcting power of the code is approximately doubled (R. J. McEliece, The Theory of Information and Coding, Addison Wesley, 2004).
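The parameters just defined can be made concrete with a short sketch. The numbers below are chosen for exposition only and are not taken from any particular code or standard; the erasure/error limits assume an idealized (maximum distance separable) code, consistent with the approximate doubling of correcting power noted above.

```python
# Illustrative FEC parameter bookkeeping: K data symbols plus R
# parity symbols form an N = K + R codeword; the redundancy
# (overhead) is R / K. For an idealized (maximum distance
# separable) code, R parity symbols can correct up to R erasures
# (locations known) but only R // 2 errors (locations unknown).

def code_parameters(k, r):
    n = k + r                 # codeword length
    redundancy = r / k        # overhead ratio
    max_erasures = r          # erasure locations are known
    max_errors = r // 2       # error locations are unknown
    return n, redundancy, max_erasures, max_errors

print(code_parameters(k=20, r=4))   # -> (24, 0.2, 4, 2)
```

Doubling R here would double both limits, at the cost of doubling the overhead ratio, which is the tradeoff described above.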
The present invention addresses erasures (packet losses) not only because the error correction power of the code is thereby increased, but also because the lower layers of the Internet protocol stack, specifically the User Datagram Protocol (UDP), over which the FEC algorithm runs, drop packets that contain errors.
As applied to the recovery of missing packets, at the transmitter, an FEC algorithm adds some number of redundant packets, called parity (or checksum) packets, to a block of multi-media data packets to protect those packets from packet loss or corruption. The parity packets are transmitted along with the multi-media data packets to the receiver. At the receiver, corrupted packets are detected and discarded and, thus, are not delivered to the receiving application. At the receiver, the FEC algorithm can recover one or more missing multi-media data packets in the block by combining the parity packets with the multi-media data packets that were received to reconstruct the missing data packets from the block.
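The recovery mechanism described above can be sketched in a few lines. The following is a minimal illustration, assuming a single XOR parity packet protecting a block of equal-length data packets; the function names and packet contents are hypothetical, not taken from the patent.

```python
# Minimal sketch of single-parity XOR packet recovery. The parity
# packet is the bytewise XOR of all data packets in the block; any
# one missing data packet equals the XOR of the parity packet with
# the data packets that did arrive.

def xor_packets(packets):
    result = bytearray(len(packets[0]))
    for p in packets:
        for i, byte in enumerate(p):
            result[i] ^= byte
    return bytes(result)

def make_parity(data_packets):
    return xor_packets(data_packets)

def recover_missing(received_packets, parity_packet):
    # received_packets: all data packets except the single lost one
    return xor_packets(received_packets + [parity_packet])

block = [b"pkt0data", b"pkt1data", b"pkt2data"]
parity = make_parity(block)
recovered = recover_missing([block[0], block[2]], parity)
assert recovered == b"pkt1data"
```

One parity packet recovers at most one missing packet per block, which motivates the multiple-parity constructions (rows, columns, diagonals) discussed below.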
The number of missing data packets in a block that an FEC algorithm can recover is limited to the number of parity packets for the block, i.e., each parity packet can recover one missing data packet. The number of parity packets, and the number and choice of data packets used to compute a parity packet, results in a tradeoff between overhead, latency and recoverability of lost packets. The actual packet loss rate and network bandwidth, and the desired recoverability and latency, must be considered in determining the number of parity packets and their construction.
For coding efficiency, it is better to have a large block: for the same redundancy, more protection can be provided, because there are more parity packets and each parity packet covers more data packets. Large blocks that encompass many data packets and that require hundreds of milliseconds or seconds to transmit also help to overcome packet losses that occur in bursts.
Variable packet loss rates, variable transmission rates, and variable compression rates, coupled with the need to minimize latency, present challenges to the design and implementation of forward error correction for packet-based real-time multi-media communication.
For variable compression rates, a transmitter implementing forward error correction must wait an indeterminate amount of time for enough multi-media data packets to accumulate to fill the block of multi-media data packets before it can generate parity packets. To smooth out the jitter in the data stream, output buffering must be used at the transmitter. Such output buffering contributes additional latency to the stream.
For variable transmission rates, a receiver implementing forward error correction might wait an indeterminate amount of time for all parity packets for a data block to arrive before it can recover the missing packets, resulting in increased latency and jitter at the receiver. To smooth out the jitter in the multi-media data stream, input buffering must be used at the receiver. Such input buffering contributes additional latency to the stream.
3. Description of the Prior Art
Forward error correction (FEC) codes and algorithms have been proposed for the correction of errors and erasures (losses) of bits and symbols (packets) in the prior art.
Specific forward error correction codes in the prior art include block codes, convolutional codes, low density parity check codes, turbo codes, fire codes, Hamming codes and Reed Solomon codes. Each of these codes employs a different algorithm for generating the parity bits/symbols and a different algorithm for recovering the original data bits/symbols. Books describing such prior art include (R. G. Gallager, Information Theory and Reliable Communication, John Wiley & Sons, Inc., 1968); (S. Lin and D. J. Costello, Jr., Error Control Coding: Fundamentals and Applications, Prentice-Hall, Inc., 1983); (P. Sweeney, Error Control Coding, John Wiley & Sons, Inc., 2002); and (R. J. McEliece, The Theory of Information and Coding, Cambridge University Press, Student Edition, 2004).
Forward error correction techniques to support real-time multi-media communication on the Internet and over wireless networks have been considered in the prior art (J. Rosenberg and H. Schulzrinne, IETF RFC 2733, An RTP payload format for generic forward error correction, Internet draft, February 1999, http://info.internet.isi.edu:80/in-drafts/files/draft-ietf-avt-fec-05.txt-), (Society of Motion Picture and Television Engineers, SMPTE 2022-1-2007, Forward Error Correction for Real-Time Video/Audio Transport Over IP Networks, http://store.smpte.org/product-p/smpte%202022-1-2007.htm), and (Society of Motion Picture and Television Engineers, SMPTE 2022-2-2007, Unidirectional Transport of Constant Bit Rate MPEG-2 Transport Streams on IP Networks, http://store.smpte.org/product-p/smpte%202022-2-2007.htm). The ProMPEG Code of Practice #3, defined in the two latter documents, is specified only for MPEG-2 video compression and works only at a constant bit rate, despite the fact that variable bit rate is more efficient. The FEC methods defined in both of these documents compute row (horizontal) and column (vertical) parity packets on a block of data packets using the XOR operation. They use a vertical parity packet to recover a single missing data packet in a column and a row parity packet to recover two missing data packets in a column.
U.S. Pat. No. 3,387,261 by Betz describes an apparatus for detecting and correcting errors in data particularly when the data are being transferred from a main memory to a secondary storage device such as a magnetic tape. The apparatus uses the XOR operation to compute the parity bits, and includes a pair of transfer registers for storing the parity bits and the data bits. Before a frame is transferred, the apparatus generates a parity bit for that frame (referred to as a short check). To handle errors in corresponding bit positions of successive frames (referred to as a burst error), the apparatus generates a parity bit for the data in those bit positions (referred to as a long check). The invention also describes computing a parity bit by referencing data in a time-oriented manner (which is referred to as a diagonal check), but concludes that doing so offers little more than a short check. An object of the invention is to perform error detection and correction in a continuous and parallel manner by means of short and diagonal parity bits, where diagonal means computing a parity bit using bit 1 of frame 1, bit 2 of frame 2, bit 3 of frame 3, i.e., along a diagonal with a −45 degree slope.
U.S. Pat. No. 4,435,807 by Scott and Goetschel describes an error correction system that computes error correction parity bits using the XOR operation for rows and diagonals at a 45 degree angle in a V-shaped pattern. The system uses a first error correction circuit that corrects single errors in the data, error detection bits and error corrections bits, and a second error correction circuit that receives the data from the first error correction circuit after the single errors have been corrected and corrects the residual multiple errors. The system aims to correct errors, rather than erasures which the present invention aims to correct.
U.S. Pat. No. 4,599,722 by Mortimer describes an apparatus for encoding and decoding digital data to permit correction of a single bit error in a sequence of data packets (bytes) using Galois field arithmetic to form the parity packet (byte). U.S. Pat. No. 4,769,818 also by Mortimer describes a method and apparatus for coding digital data to permit correction of one or two incorrect data packets (bytes). Each data byte comprises data bits and one parity check bit. To encode the sequence of data bytes, two code bytes are determined from the data bytes. The data bytes and the code bytes form an encoded data block. The data blocks are grouped into bundles of data blocks. A bundle is a two-dimensional array of bytes, where the horizontal rows and the vertical columns of the array are encoded separately. The invention is intended to correct bit errors in bytes, rather than to recover lost packets, and does not use diagonals like the present invention.
U.S. Pat. No. 4,796,260 describes the Schilling Manela forward error correction and detection method and apparatus. Their method uses two or more lines (diagonals) of data symbols (or bits) with different slopes and calculates parity symbols for each of those lines. The slopes may be all positive or all negative or some combination thereof. For a line of data symbols having p-bits per symbol, the method uses mod 2^p arithmetic to form the parity symbol. The algorithm employs a composite error graph and a threshold. For each data symbol, the composite error graph keeps track of the number of errors in the lines that contain that symbol. The algorithm starts with the symbol that has the largest number of errors beyond the threshold. It chooses a new data symbol to minimize that number, and substitutes that data symbol into the sequence of data symbols. Thus, the algorithm of Schilling and Manela differs from the present algorithm, which starts with a row, column or diagonal that has the smallest number of missing packets, i.e., a single missing packet, and recovers that packet. U.S. Pat. No. 4,849,976 by Schilling and Manela builds on the above invention by appending the parity rows to the bottom of the data block. It calculates the first set of parity symbols (bits) on the data symbols (bits) in the first line (diagonal), the second set of parity symbols on both the data symbols (bits) in the second line and the parity symbols (bits) in the first line, and so on.
U.S. Pat. No. 6,012,159 by Fischer and Paleologou describes a method and system for error-free data transfer in satellite broadcast networks and other networks. Their invention is based on Galois fields and Vandermonde matrices and uses matrix inversion and multiplication, which are more expensive operations than the XOR operation used in the present invention.
U.S. Pat. No. 6,079,042 by Vaman, Chakravarthy and Hong applies to ATM and other networks where, in one embodiment, the ATM adaptation layer selectively implements error recovery, depending on the type of data transmitted, e.g., video or audio. The method segments the data into blocks of data packets (cells) of a pre-determined size, D×L and calculates vertical and diagonal parity packets (cells) for the block. The diagonal in their patent is quite different from the diagonal in the present invention. In their invention, there are L vertical parity packets and D+L diagonal parity packets. A vertical parity packet is formed by using modulo 2 addition of the data packets in a column of the block. The D+L diagonal parity packets are formed by XORing varying numbers of data packets after skewing the block.
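To make the distinction between vertical and diagonal checks concrete, the following sketch computes column (vertical) parity packets and wraparound diagonal parity packets over a D×L block using bytewise XOR. It is illustrative only, with arbitrary dimensions and toy packet contents, and is not the exact skewed construction of the Vaman et al. patent.

```python
# Simplified sketch of vertical and diagonal parity over a D x L
# block of equal-length packets, using bytewise XOR. Illustrative
# only; not the exact construction of U.S. Pat. No. 6,079,042.

D, L, PKT = 3, 4, 4   # rows, columns, bytes per packet (arbitrary)

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# A toy block of packets with distinct contents.
block = [[bytes([16 * r + c] * PKT) for c in range(L)] for r in range(D)]

# One vertical parity packet per column.
vparity = [bytes(PKT)] * L
for r in range(D):
    vparity = [xor(vparity[c], block[r][c]) for c in range(L)]

# One wraparound diagonal parity packet per starting column:
# row r contributes its packet from column (c + r) mod L.
dparity = []
for c in range(L):
    p = bytes(PKT)
    for r in range(D):
        p = xor(p, block[r][(c + r) % L])
    dparity.append(p)

# A single missing packet in a column can be rebuilt from that
# column's parity packet and the column's surviving packets.
rebuilt = xor(xor(vparity[2], block[0][2]), block[2][2])
assert rebuilt == block[1][2]
```

Diagonal parities are repaired the same way along their (wrapped) diagonal, which is what lets diagonal checks resolve loss patterns that defeat rows and columns alone.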
U.S. Pat. No. 6,948,109 by Coe uses pseudo-diagonals for low density parity check (LDPC) forward error correction. The invention extends a portion of the original LDPC matrix such that the LDPC code becomes a periodic repeating code. A feature of Coe's method is that data packets are processed incrementally as they are received, rather than as a block when the entire block has been received, as in the present invention. The disadvantage of processing the data packets incrementally is that the amount of processing time differs between the best case and the worst case, which introduces jitter.
U.S. Pat. No. 7,197,685 by Limberg uses Reed Solomon forward error correction for transmitted digital television signals and MPEG-2 data packets. Reed Solomon codes incur relatively high computational costs and are difficult to use at the high data rates used for High Definition Television signals. The present invention achieves almost the same recoverability for the same number of parity packets as Reed Solomon codes without incurring the high computational cost.
U.S. patent application 20060029065 by Fellman describes a system and method for low-latency content-sensitive forward error correction for MPEG-encoded video and audio streams, with more protection for MPEG I-Frames and audio frames. The method is based on Galois field arithmetic, which is computationally more expensive than the XOR operation used in the present invention. Moreover, it uses only rows and columns and, thus, can handle only relatively short bursts of erasures, unlike the present invention, which also uses diagonals and, thus, can handle longer bursts of erasures.
U.S. patent application 20060156198 by Boyce and Zheng presents the inventors' Complete User Datagram Protocol (CUDP) for multi-media traffic on wireless packet networks that identifies corrupted frames and uses forward error correction. UDP discards packets with corrupted headers and/or corrupted payload. UDP Lite discards only packets with corrupted headers. CUDP corrects UDP erasures and UDP Lite erasures and errors. The patent does not define a new FEC code or algorithm but uses an existing maximum distance separable packet code, such as the Reed Solomon code, that uses vertical (column) packet coding, long vertical packet coding in which several columns participate in a parity check, or horizontal (row) and vertical (column) packet coding.
The article (S. M. Reddy and J. P. Robinson, Random error and burst correction by iterated codes, IEEE Transactions on Information Theory, IT-18(2), pp. 182-185, January 1972) describes a method for burst and random error correction by iterated codes that uses row and column parity checks, where the columns are decoded before the rows and information in the form of weights is conveyed to the next step of the row decoding. The decoding consists of erasing the positions in the word with the smallest weight and then the next largest weight, etc. With computationally inexpensive distance 1 codes, this strategy can correct bursts of length L for a D×L block. The present invention can correct a larger class of patterns of simultaneous burst and random packet loss (erasures) and can correct a burst of length less than or equal to 2×L-S erasures.
The article (P. G. Farrell and S. J. Hopkins, Burst error-correcting array codes, The Radio and Electronic Engineer Journal, vol. 52, no. 4, pp. 182-192, April 1982) describes array codes for correcting bursts that include both errors and erasures. Their method uses column and −45 degree (slope −1) diagonal parity checks with row-by-row transmission of data packets or, alternatively, column and row parity checks with −45 degree diagonal transmission of data packets. It does not use row, column and diagonal parity checks like the present invention does.
The article (M. Blaum, P. G. Farrell and H. C. A. Van Tilborg, A class of burst error-correcting array codes, IEEE Transactions on Information Theory, vol. IT-32, no. 6, November 1986) notes that an array code in which the last row and the last column contain redundant bits can correct any single error and demonstrates that, if the bits are read (transmitted) diagonally instead of horizontally, the code can correct bursts of errors of length up to L if and only if D≧2×(L−1). The present invention corrects only erasures (missing packets), but can correct as many as 2×L+D−3 erasures with much smaller values of D.
The paper (A. J. McAuley, Reliable broadband communication using a burst erasure correcting code, Proceedings of ACM SIGCOMM, pp. 297-306, September 1990) describes a burst erasure correcting code for loss due to congestion in broadband networks that uses a simplified Reed Solomon code as a complement to automatic retransmission requests. Reed Solomon codes incur relatively high computational costs, and are difficult to use at the high data rates used for High Definition Television signals.
The paper (N. Shacham and P. McKenney, Packet recovery in high-speed networks using coding and buffer management, Proceedings of IEEE INFOCOM, pp. 124-131, 1990) describes a forward error correction method for packet recovery in high-speed networks. The data packets are arranged in a two-dimensional array, and a parity packet is added to each row and column using the XOR operation on the bits in the packets in the respective row or column. The parity packets are computed as follows. The first bit of a parity packet is the XOR of the first bit of the data field of all packets comprising the block. The second bit of the parity packet is the XOR of the second bit of the data field of all packets comprising the block, etc. If the packets are transmitted by rows, the column parity packets are used to recover a burst of missing packets of any length less than or equal to the number of columns, and the row parity packets are used to recover additional missing packets scattered across the array. The present invention uses diagonals in addition to rows and columns and, thus, is able to recover longer burst erasures and more random erasures than the scheme of Shacham and McKenney.
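The row/column scheme just described admits a simple iterative decoder: any row or column containing exactly one erasure can be repaired from its parity packet, and each repair may unlock further rows and columns. The following minimal sketch illustrates that idea; the helper names are ours, not Shacham and McKenney's.

```python
# Iterative ("peeling") row/column erasure recovery over a 2-D
# block of equal-length packets, in the spirit of Shacham and
# McKenney. Illustrative sketch only.

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def recover(block, row_parity, col_parity):
    """block: 2-D list of packets, with None marking erased packets."""
    rows, cols = len(block), len(block[0])
    progress = True
    while progress:
        progress = False
        for r in range(rows):
            missing = [c for c in range(cols) if block[r][c] is None]
            if len(missing) == 1:       # row repairable from its parity
                p = row_parity[r]
                for c in range(cols):
                    if c != missing[0]:
                        p = xor(p, block[r][c])
                block[r][missing[0]] = p
                progress = True
        for c in range(cols):
            missing = [r for r in range(rows) if block[r][c] is None]
            if len(missing) == 1:       # column repairable from its parity
                p = col_parity[c]
                for r in range(rows):
                    if r != missing[0]:
                        p = xor(p, block[r][c])
                block[missing[0]][c] = p
                progress = True
    return block

# Demo: a burst wipes out an entire row; the column parities repair it.
a, b, c, d = b"\x01", b"\x02", b"\x03", b"\x04"
row_parity = [xor(a, b), xor(c, d)]
col_parity = [xor(a, c), xor(b, d)]
damaged = [[None, None], [c, d]]
assert recover(damaged, row_parity, col_parity) == [[a, b], [c, d]]
```

Adding diagonal parity checks to this decoder, as the present invention does, enlarges the set of erasure patterns for which some check line contains exactly one missing packet, which is why longer bursts become recoverable.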
The article (L. Rizzo, Effective erasure codes for reliable computer communication protocols, Computer Communication Review, vol. 27, no. 2, pp. 24-36, April 1997) presents a general overview of erasure codes for forward error correction. In particular, it discusses the use of Vandermonde matrices computed over the Galois field GF(p^r) for erasure codes. Such codes require matrix multiplication and inversion, which are more costly operations than the simple XOR operation.
The article (Y. Wang, S. Wenger, J. Wen and A. K. Katsaggelos, Error resilient video coding techniques, IEEE Signal Processing Magazine, vol. 17, no. 4, pp. 61-82, July 2000) discusses error-resilient video coding techniques for real-time video communication over unreliable networks. In particular, it reviews the state-of-the-art for the H.323 and MPEG-4 standards. It does not introduce or discuss any particular FEC code or algorithm.
It will be evident to those skilled in the art that selected embodiments of the present invention address one or more of the shortcomings evident in these references.