This relates to transmission protocols and more particularly to transmission protocols in a packet transmission environment.
Advances in data transmission and switching over the last decade are promising deployment of communication systems with raw bandwidth and switching speeds that are an order of magnitude higher than current systems. Optical fibers, for example, allow transmission of tens of gigabits/sec over several kilometers without repeaters. Switch fabrics that can switch bit-streams of more than hundreds of megabits/sec have already been prototyped. One such system has been described, for example, by A. Huang and S. Knauer in "Starlite: a Wideband Digital Switch," Proceedings of Globecom 84, December 1984, pp 125-127. However, the fruits of these efforts have not yet been realized either in the internetworking of diverse high speed networks or in the delivery of high end-to-end bandwidth to applications within an operating system. Ideally, any single user connected to a packet network should be able to transmit at the peak bandwidth of the channel, once access is obtained. In practice, however, the obtainable end-to-end throughput is only a small fraction of the transmission bandwidth, particularly at high speeds. This throughput limitation comes from a variety of factors, including protocol processing in the network layers, buffer congestion, and flow control mechanisms, as well as from various interfaces that transfer data from the network to a process in the host.
Reliable communication can be achieved with very little protocol processing if the network is perfect. It is clear that the limitations due to protocol processing come about because the protocol has to overcome network deficiencies such as bit errors, large and varying packet delays in the network, packet loss due to congestion, out-of-order delivery of packets, and overflow of buffers at various nodes in the network. Recently, there has been considerable interest in transport protocols for high speed networks. Broadly, higher speed can be achieved by a combination of three means. First, one can assume that the network has fewer deficiencies and therefore the protocol has to correct for fewer network problems. An example of this is the Packet Stream Protocol (PSP), which assumes a virtual circuit network and, therefore, packets are never received out of order. Second, higher speeds can be obtained by implementing some of the protocol processing in hardware. Third, one can invent new protocols that are better suited for high speed networks. The latter is the primary thrust of the instant invention.
To illustrate where the bottlenecks in present day transport protocols come from, it is instructive to consider a protocol such as one based on the "Go back N" method of error and flow control over a datagram network. Such a protocol is described, for example, by D. Comer in Internetworking with TCP/IP, Prentice Hall, 1988. Such a protocol may be in use, for example, over a network which has a large bandwidth--such as 1 Gbit/sec--and large latency--such as 60 msec (which is approximately the round trip delay for a terrestrial network between New York and San Francisco). In such a network, the states of the two communicating ends (i.e., transmitter and receiver) are usually out of synchronization due to the round trip propagation delay. Therefore, any change in the state of the transmitter or the receiver can be made known to the other only after a certain amount of delay. For a 1 Gbit/sec network, there can be 30 Mbits in transit from the transmitter to the receiver. Therefore, in the Go back N method of error control, if there is a transmission error, a packet loss, a long delay in delivery of a packet, or delivery of a packet out of sequence, 60 Mbits may have to be retransmitted. Similarly, if the buffer at the receiver overflows, the overflowed packets can be recovered only after 60 Mbits are retransmitted. This causes significant loss of throughput due to retransmissions. In addition, if some of the control messages from the receiver to the transmitter are received in error, elaborate error recovery mechanisms are necessary because the receiver has no way of knowing whether the transmitter has received the message correctly. Such difficulties are usually handled by timers which go off after a certain amount of waiting, determined by the round trip propagation delay. In most datagram networks this delay varies widely and is difficult to estimate.
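The arithmetic above can be sketched as a short calculation. The link parameters are the illustrative values from the text (1 Gbit/sec bandwidth, 60 msec round trip delay), not measurements of any particular network:

```python
# Illustrative bandwidth-delay calculation for the Go back N example above.
# Example parameters from the text; not measurements of a real network.

bandwidth_bps = 1e9   # 1 Gbit/sec link
rtt_sec = 0.060       # 60 msec round trip delay (e.g., New York to San Francisco)

# Bits "in flight" from transmitter to receiver: one-way delay is half the
# round trip, so 1 Gbit/sec * 30 msec = 30 Mbits in transit at any instant.
bits_in_transit_one_way = bandwidth_bps * (rtt_sec / 2)

# Under Go back N, a single error, loss, or out-of-sequence delivery forces
# retransmission of everything sent after the affected packet -- up to a
# full round trip's worth of data: 1 Gbit/sec * 60 msec = 60 Mbits.
bits_retransmitted_worst_case = bandwidth_bps * rtt_sec

print(f"One-way bits in transit: {bits_in_transit_one_way / 1e6:.0f} Mbits")
print(f"Worst-case Go back N retransmission: {bits_retransmitted_worst_case / 1e6:.0f} Mbits")
```

The calculation makes plain why the retransmission penalty grows with the bandwidth-delay product: at high speed and continental latency, a single network deficiency can cost tens of megabits of repeated transmission.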
Thus, while such protocols were deemed reasonable two decades ago, when only low bandwidth was available at considerable expense and the latency of the network was not large (due to the small geographical size of most networks), their suitability for the future is questionable. This is due to loss of throughput and the large amount of protocol processing that results from using economized control messages that contain only changes in certain states. Moreover, the large number of control messages and states, and the dependence on round trip propagation delay in some of the protocol processing, make it difficult to parallelize the protocol processing.