The present invention relates to a system and method for improved efficiency of data transmissions using multiple file streams. More specifically, the present invention relates to classifying bits of non-homogeneous data into different streams according to the importance of the data, negotiating different quality of service guarantees for each stream according to the desired Bit Error Rate (BER), and transmitting the data using the multiple streams.
Many applications require reliable transmission of information. With general audio communications, noise or other distortion in the transmission results in garbled or distorted sound at the receiving end, but the data may still be intelligible. However, transmission of multimedia files such as audio-video streams, real-time data, and complex images requires extremely reliable and timely delivery of data. Transmission errors may result in blurred images or unintelligible data.
Some applications require a reliable transfer of user bits across the network. Since transmission errors are unavoidable, such applications necessitate the correction of errors. Errors are due to missing packets or packets corrupted by transmission errors.
The transmission of bits is never perfectly reliable, and random errors occur even with the most carefully designed transmission link. The physical layer provides an unreliable bit transmission facility commonly called an unreliable bit pipe. The network nodes follow special procedures to transmit packets reliably over unreliable bit pipes. Procedures specify how the nodes can detect transmission errors and correct them by retransmission.
The current Internet standard relies on Internet Protocol (IP) forwarding, which uses a “best effort” standard for data transmission. Generally, the network attempts to deliver all traffic as soon as possible within the limits of its capacity, but without any guarantees relating to throughput, delay variation and packet loss. The IP layer communicates with packets called datagrams. Typically, datagrams have headers of 20-60 bytes and data payloads of up to 65K bytes. The IP protocol and the architecture of the Internet are based on the idea that datagrams, with source and destination addresses, can traverse a network of IP routers independently, that is, without the help of the sender or receiver.
Depending on network traffic, datagrams addressed to the same destination may travel completely different paths, as routers along the way dynamically choose routes to avoid loading down any one link. Thus, datagrams may be lost or may arrive out of order. The IP protocol can fragment datagrams (in routers) and reassemble them at the receiver. Generally, IP routers can discard IP datagrams en route without notice to the end users. IP relies on upper-level transport layers (such as TCP) to monitor datagrams and retransmit them as necessary.
Reliability mechanisms such as TCP assure data delivery, but do not ensure timely delivery or throughput. Thus, the IP layer uses a “best effort” service, which makes no guarantees about when data arrives or how much data it can deliver.
For traditional wired networks and for typical programs such as Internet browsers, email programs, file transfer protocols and the like, the IP limitation is acceptable because most applications running over the IP layer are low priority and low bandwidth data transmissions with high delay tolerance and delay variation. Furthermore, through retransmission, packet loss and bit errors can be reduced to insignificant levels.
As previously indicated, routers can discard datagrams without notice to the sender or receiver. In typical wired local area networks (LAN) and wide area networks (WAN) environments, such packet loss is accounted for using retransmission or other error coding. Other factors also contribute to the efficiency of data communications. For instance, noise introduces a fundamental limit on the rate at which communication channels can transmit bits reliably. Attenuation distortion may also occur due to deterioration of the strength of a signal as it propagates over a transmission line.
The bit error rate (BER) provides a measure of circuit or transmission quality. The BER is derived by dividing the number of bits received in error by the total number of bits transmitted during a predefined period of time. The BER provides an indication of end-to-end channel performance, and noise, with the resulting BER, effectively bounds the capacity of a channel. If, for example, the channel capacity is equal to 30 kilobits per second (Kbps), then it is possible to design a transmitter and receiver that transmit 29,999 bits per second (bps) over the channel with an error rate smaller than 10^-9, i.e., with fewer than one bit in a billion being incorrectly received. Such a channel cannot transmit reliably faster than 30,000 bps.
A good transmission link makes few errors. For instance, the bit error rate of a typical optical fiber link is 10^-12. Such a link corrupts, on average, one bit out of every trillion bits. Generally, if the transmission rate of the link is 155 megabits per second (Mbps), then one incorrect bit arrives, on average, every 1/(10^-12 × 155×10^6) ≈ 6,450 seconds, or roughly every 1.8 hours.
Copper lines (wire pairs and coaxial cables) have larger bit error rates; a BER of 10^-7 is typical. A transmission link that sends packets of N bits each, with a bit error rate equal to BER, corrupts some fraction of the packets. That fraction is the packet error rate (PER) of the link. The PER is the probability that the N bits of one packet are not all received correctly, and is equal to PER = 1 − (1 − BER)^N.
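The BER and PER arithmetic described above can be sketched numerically. The code below is illustrative only; the BER values, packet size, and link rate follow the figures given in the text.

```python
# Sketch of the BER/PER arithmetic described above (illustrative values only).

def packet_error_rate(ber: float, n_bits: int) -> float:
    """PER = 1 - (1 - BER)^N: probability that at least one of N bits is corrupted."""
    return 1.0 - (1.0 - ber) ** n_bits

# Copper link: BER of 1e-7, with 12,000-bit (1500-byte) packets.
per_copper = packet_error_rate(1e-7, 12000)

# Optical fiber: BER of 1e-12 at 155 Mbps. The mean time between bit errors
# is 1 / (BER * rate) seconds -- roughly 6,450 seconds, or about 1.8 hours.
seconds_per_error = 1.0 / (1e-12 * 155e6)

print(f"PER over copper: {per_copper:.4f}")              # about 0.0012
print(f"Seconds between fiber bit errors: {seconds_per_error:.0f}")
```

Note that for small BER, PER is approximately N × BER, which is why long packets suffer disproportionately on noisy links.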
Whereas the BER of a coaxial cable or optical fiber link can be made very small, the situation is very different in wireless links. In wireless communication links, the BER can fluctuate widely over time, from 10^-5 to 10^-1. The wireless receiver may find itself in a region where the power of the received signal is too low to recover the bits successfully. Physical obstructions, weather patterns, signal interference, attenuation, and numerous other factors contribute to the BER of a wireless transmission.
To account for data transmission errors, networks use two types of error control mechanisms: error correction, and error detection with retransmission. When using error correction, the packets contain enough redundant information, called an error correction code, for the receiver to be able to correct corrupted packets and to reconstruct missing packets. CD players use such error correction codes to correct errors caused by dust or scratches on the compact disc.
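A minimal illustration of forward error correction is a triple-repetition code. This is far weaker than the codes used on compact discs or real links, but it shows the principle at work: redundant bits let the receiver correct errors without any retransmission.

```python
# Minimal forward-error-correction sketch: a 3x repetition code.
# Real systems use far stronger codes (e.g. Reed-Solomon on CDs),
# but the principle is the same: redundancy enables correction.

def encode(bits):
    """Repeat each bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    """Majority vote over each group of three; corrects any single flipped bit per group."""
    out = []
    for i in range(0, len(received), 3):
        group = received[i:i + 3]
        out.append(1 if sum(group) >= 2 else 0)
    return out

codeword = encode([1, 0, 1])            # [1,1,1, 0,0,0, 1,1,1]
codeword[1] = 0                         # inject a single-bit transmission error
assert decode(codeword) == [1, 0, 1]    # error corrected, no retransmission needed
```

The cost is visible here too: the channel carries three bits for every user bit, which is why error correction trades bandwidth for reliability.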
When error detection is used, each packet contains additional bits, called an error detection code, that enable the receiver to detect that transmission errors corrupted the packet. When the receiver gets an incorrect packet, it arranges for the sender to send another copy of the same packet. The sender and receiver follow specific rules, called retransmission protocols, which govern these retransmissions.
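The detect-and-retransmit exchange described above can be sketched as a stop-and-wait loop over a simulated lossy channel. The helper names and the loss model below are hypothetical; real retransmission protocols add timers, sequence numbers, and windowing.

```python
import random

# Stop-and-wait retransmission sketch over a simulated unreliable channel.
# Names and the loss model are illustrative, not a real protocol.

random.seed(7)  # deterministic run for the example

def unreliable_send(packet, loss_rate=0.3):
    """Deliver the packet, or return None to model a lost/corrupted packet."""
    return None if random.random() < loss_rate else packet

def send_reliably(packets):
    """Resend each packet until the (simulated) receiver acknowledges a good copy."""
    delivered, attempts = [], 0
    for pkt in packets:
        while True:
            attempts += 1
            received = unreliable_send(pkt)
            if received is not None:       # good copy arrived: receiver ACKs
                delivered.append(received)
                break
            # no ACK: sender retransmits the same packet

    return delivered, attempts

data = ["pkt0", "pkt1", "pkt2"]
delivered, attempts = send_reliably(data)
assert delivered == data                   # all packets delivered, in order
```

The `attempts` counter makes the cost of this approach concrete: every loss consumes an extra round trip, which is the latency penalty discussed later for real-time traffic.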
For specific applications, such as multimedia, stored video, and other error-sensitive transmissions, various encoding techniques coupled with error correction alternatives have been employed. Typically, data to be transmitted are first gathered into a block of characters. An algorithm is applied to the block to generate one or more checksum characters that are appended to the block for transmission. The receiving device performs the same algorithm on the block it receives. The locally generated checksum is compared to the transmitted checksum. If the locally generated and transmitted checksums are equal, the data is assumed to have been received error free. Otherwise, the data block is assumed to have one or more bit errors, and the receiving device then requests that the transmitting device retransmit the data block.
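The checksum exchange described above can be sketched with CRC-32, one common checksum choice (the text does not fix a particular algorithm):

```python
import zlib

# Sketch of block-level error detection with an appended checksum.
# CRC-32 is used for illustration; the text does not specify an algorithm.

def make_frame(block: bytes) -> bytes:
    """Append a 4-byte CRC-32 checksum to the data block for transmission."""
    return block + zlib.crc32(block).to_bytes(4, "big")

def check_frame(frame: bytes) -> bool:
    """Recompute the checksum locally and compare it with the transmitted one."""
    block, sent_crc = frame[:-4], frame[-4:]
    return zlib.crc32(block).to_bytes(4, "big") == sent_crc

frame = make_frame(b"important data")
assert check_frame(frame)                          # clean frame passes

corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]   # flip one bit in transit
assert not check_frame(corrupted)                  # receiver requests retransmission
```

Unlike the repetition code sketched earlier, a checksum can only detect errors, not correct them, so a failed check always costs a retransmission.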
Most compression algorithms are data-content independent. Specifically, the compression is actually performed at the data link level, rather than at the presentation or application level. One problem with data-independent compression methods is that it is difficult to unambiguously identify the various data types being compressed: data types may be interspersed or partially compressed, making data type recognition difficult. Another problem is that, given a known data type, or a mix of data types within a specific set or subset of input data, it may be difficult to predict which data encoding technique yields the highest compression ratios.
In real-time multimedia transmissions, retransmission introduces unwanted latency into the data transmission. Additionally, redundancy wastes precious bandwidth and requires more processing on both ends of the transmission. While increasing bandwidth to a level that eliminates packet delays is possible, such bandwidth increases do not account for temporary overloads, and congestion cannot be avoided no matter how much bandwidth is available.
Modern communication systems are increasingly required to provide Quality of Service (QoS) negotiation capabilities. QoS means providing consistent and predictable data delivery. QoS is a continuum, defined by the network performance characteristics that are most important to users and their particular service level agreements. QoS mechanisms must work with wireline networking considerations (i.e., different line qualities, bandwidth considerations, traffic volume, and the like) as well as considerations particular to the wireless environment (i.e., signal degradation, shadowing, signal cancellation, and the like). The inherent BER of wireless transmissions requires that error detection and correction be performed efficiently, even when efficient bandwidth allocation is difficult.
Generally, certain bits within any data stream are more important than others. With respect to image or video data, there is typically unused white space or lines within the frames which are identical or so similar that loss of such data in a lossy compression algorithm would be unnoticeable at the receiving end. Conversely, loss of the important bits within the data stream would result in a fuzzy image, garbled sound, or an unreadable file at the receiving end.
Compression is possible because the source output contains redundant or barely perceptible information. Redundant information is data that does not add information, such as white lines between lines of text on a page, periods of silence in a telephone signal, white space or identically shaded pixels in a picture, and similar lines across frames in a video sequence. Redundancy can be reduced by algorithms that achieve near-lossless compression even while reducing the number of bits in the source output. Barely perceptible information is, as the name indicates, information that does not affect the way we perceive the source output. Examples of barely perceptible information in audio communications include sounds at frequencies that the human ear does not hear, or sounds that are masked by louder sounds. In images, some fine details are difficult to perceive and can be eliminated without much picture degradation. Such barely perceptible information is reduced or eliminated by lossy compression algorithms.
The transmission of information from one computer to another typically uses one compression algorithm for the entire transmission, such that lossy compression algorithms will impact important bits as well as the less important bits. Thus, processing power is wasted in error checking unimportant bits.
Many data streams, such as compressed video and voice, contain elements with different requirements with respect to bit error rate. In compressed video, corruption of some elements will disrupt the entire picture, while corruption of other elements will disrupt only a small portion of the frame.
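Splitting such a stream by importance can be sketched as follows. The element tags and the classification rule are hypothetical; in a real compressed-video system the high-importance class would hold elements like frame headers and motion vectors, whose corruption disrupts the whole picture.

```python
# Sketch of classifying stream elements by importance into separate
# sub-streams, each of which could then negotiate its own BER/QoS target.
# The element format and importance labels are hypothetical.

def split_by_importance(elements):
    """Partition (importance, payload) elements into high/low sub-streams."""
    streams = {"high": [], "low": []}
    for importance, payload in elements:
        streams[importance].append(payload)
    return streams

video_elements = [
    ("high", "frame header"),    # corruption would disrupt the entire picture
    ("high", "motion vectors"),
    ("low",  "texture detail"),  # corruption affects only a small region
    ("low",  "color residual"),
]

streams = split_by_importance(video_elements)
assert streams["high"] == ["frame header", "motion vectors"]
assert streams["low"] == ["texture detail", "color residual"]
```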
State of the art compression techniques account for this difference by applying an error correction code at the application level to provide greater protection for the more valuable elements of the data stream. This technique, however, is an inherently inefficient use of bandwidth, because error correction applied at the application level is of necessity a hard-decision code. The application layer cannot take advantage of the soft-decision capability available only to the lower layers of the communication stack. Soft decisions are known to provide 2 dB to 5 dB improvement in signal quality and jamming immunity.
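The hard-versus-soft distinction can be illustrated with the repetition code over a noisy analog channel. A hard-decision decoder thresholds each received sample to a bit before voting, discarding how confident each sample was; a soft-decision decoder works on the raw samples and keeps that reliability information. The sample values below are illustrative.

```python
# Hard vs. soft decision decoding of a 3x repetition code (illustrative).
# Samples are noisy analog values: nominally +1.0 for bit 1, -1.0 for bit 0.
samples = [0.9, -0.2, -0.4]   # bit 1 was sent; noise pushed two samples negative

# Hard decision: threshold each sample to a bit first, then majority-vote.
hard_bits = [1 if s > 0 else 0 for s in samples]    # [1, 0, 0]
hard_decision = 1 if sum(hard_bits) >= 2 else 0     # votes 0 -- wrong

# Soft decision: sum the raw samples, so a strongly positive sample (0.9)
# outweighs two weakly negative ones (-0.2, -0.4).
soft_decision = 1 if sum(samples) > 0 else 0        # sum = 0.3 -> decodes 1

print(hard_decision, soft_decision)   # hard decoder errs; soft decoder recovers the bit
```

This is the information the application layer cannot see: by the time data reaches it, the lower layers have already made the hard decisions.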