The present disclosure advances the state of the art for networks that carry packet data flows using reliable transport protocols with window-based flow control or TCP-friendly flow control, such as that described in S. Floyd and K. Fall, “Promoting the Use of End-to-End Congestion Control in the Internet”, IEEE/ACM Transactions on Networking, Vol. 7, No. 4, August 1999, pp. 458-472, fully incorporated herein by reference, TCP/IP being the most common example.
Currently, TCP is the most widely used transport protocol on packet-switched networks—approximately 90% of all packet network traffic uses TCP, and there is a huge installed base of IP devices and appliances (personal computers, host computers, phones, video systems, PDAs, etc.) that use TCP. At least three issues with TCP are that: (1) an endpoint cannot request and receive a throughput/goodput guarantee; (2) in conventional networks, packets will be dropped, which causes TCP to reduce its sending rate; and (3) if multiple flows share a network link of a given bandwidth, a network user cannot specify a particular provisioning of the link bandwidth among the different flows because TCP automatically provisions an equal amount (a fair share) of the link bandwidth among all the flows.
Prior to this invention, no practical general method for per flow guaranteed throughput and goodput existed. The system and method for per flow guaranteed throughput and goodput according to the present invention enables bandwidth, throughput, and/or goodput provisioning of multiple TCP flows across shared links. Furthermore, self-regulating protocols such as TCP use feedback acknowledgement signals for signaling congestion to the sender, for detecting dropped packets, and for controlling the sender's flow rate. Because this invention eliminates Layer 3 packet drops and contention between flows for link resources, it obviates the need for congestion signaling.
In effect, a TCP-based transport system becomes deterministic with respect to throughput, goodput, and reliability at the granularity of individual flows, and it becomes deterministic with respect to provisioning bandwidth, throughput, and goodput among multiple flows sharing common link resources. The deterministic throughput/goodput, zero packet drop behavior, and link bandwidth provisioning capability may have a strong impact on the design of systems and applications above the transport layer and the design and management of networks below the transport layer.
A comprehensive description of the issues with conventional TCP/IP that are solved by this invention, as well as the underlying theory and analysis of the invention, may be found in “TCP/SN: Transmission Control Protocol over Sequenced Networks”, by S. Moore.
Hereafter, the term “TCP” will be used to refer to any transport protocol using window-based flow control or TCP-friendly congestion control, such as SCTP. This includes rate-based congestion control, which has a direct mapping to window-based flow control as disclosed in D. Loguinov and H. Radha, “End-to-End Rate-Based Congestion Control: Convergence Properties and Scalability Analysis”, IEEE/ACM Transactions on Networking, Vol. 11, No. 4, August 2003, pp. 564-577, fully incorporated herein by reference.
TCP is a complex protocol that was designed for reliable packet data transport over packet data networks, with particular focus on IP networks. For a general overview of TCP, see W. Stallings, “High-Speed Networks: TCP/IP and ATM Design Principles”, Prentice-Hall, Upper Saddle River, N.J., USA, 1998; J. Postel et al., “Transmission Control Protocol”, RFC 793, September 1981, http://www.ietf.org/rfc/rfc0793.txt; and R. Braden et al., “Requirements for Internet Hosts—Communication Layers”, RFC 1122, October 1989, http://www.ietf.org/rfc/rfc1122.txt, incorporated fully herein by reference. One of TCP's defining characteristics is its ability to self-regulate its flow rate in response to changing network conditions by using a mix of congestion control and congestion avoidance strategies as disclosed in M. Allman et al., “TCP Congestion Control”, RFC 2581, April 1999, http://www.ietf.org/rfc/rfc2581.txt, incorporated herein by reference. The self-regulation capability allows a set of TCP flows to share network resources and bandwidth fairly, which is critical for autonomous network operation.
Of particular interest is the congestion avoidance strategy in which the signaling stemming from a dropped packet causes the TCP source to multiplicatively reduce its flow rate (e.g., by a factor of two) and then additively increase the flow rate until another dropped packet is signaled or until the maximum flow rate is achieved. This type of congestion avoidance uses an Additive-Increase, Multiplicative-Decrease (AIMD) strategy. TCP's AIMD strategy admits “fairness” among multiple TCP flows competing for common network resources. For additional discussion on this topic, see S. Floyd, “Connections with Multiple Congested Gateways in Packet-Switched Networks Part 1: One-way Traffic”, Computer Communication Review, Vol. 21, No. 2, April 1991; S. Bohacek, J. Hespanha, J. Lee, K. Obraczka, “Analysis of a TCP hybrid model”, Proc. of the 39th Annual Allerton Conference on Communication, Control, and Computing, October 2001; J. Hespanha et al., “Hybrid Modeling of TCP Congestion Control”, Proc. 4th Int. Workshop on Hybrid Systems: Computation and Control (HSCC 2001); and D. Loguinov and H. Radha, “End-to-End Rate-Based Congestion Control: Convergence Properties and Scalability Analysis”, IEEE/ACM Transactions on Networking, Vol. 11, No. 4, August 2003, pp. 564-577, incorporated herein by reference. In conventional IP networks carrying TCP traffic, an AIMD flow control strategy is critical for maintaining throughput stability. See V. Jacobson and M. Karels, “Congestion Avoidance and Control”, Proceedings of SIGCOMM '88, Stanford, Calif., August 1988, ACM; and S. Floyd and K. Fall, “Promoting the Use of End-to-End Congestion Control in the Internet”, IEEE/ACM Transactions on Networking, August 1999, incorporated herein by reference. The importance and broad applicability of the AIMD strategy is indicated by the recent result that AIMD is the only fair flow control strategy. See, for example, D. Loguinov and H. Radha, “End-to-End Rate-Based Congestion Control: Convergence Properties and Scalability Analysis”, IEEE/ACM Transactions on Networking, Vol. 11, No. 4, August 2003, pp. 564-577.
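The AIMD dynamic described above can be summarized by a minimal, purely illustrative model. The function names, window units, and the additive and multiplicative constants below are assumptions of this sketch, not part of any actual TCP implementation:

```python
# Illustrative model of TCP's AIMD congestion-avoidance dynamic:
# additive increase of the congestion window each round-trip time,
# multiplicative decrease (e.g., by a factor of two) when a dropped
# packet is signaled. Names and parameters are hypothetical.

def aimd_step(cwnd, drop_signaled, max_cwnd=64.0, alpha=1.0, beta=0.5):
    """Return the next congestion window (in segments)."""
    if drop_signaled:
        return max(1.0, cwnd * beta)      # multiplicative decrease
    return min(max_cwnd, cwnd + alpha)    # additive increase

def simulate(drop_pattern, cwnd=1.0):
    """Trace the window over a sequence of per-RTT drop indications."""
    trace = [cwnd]
    for drop in drop_pattern:
        cwnd = aimd_step(cwnd, drop)
        trace.append(cwnd)
    return trace

# Ten loss-free round trips grow the window additively; one signaled
# drop then halves it.
print(simulate([False] * 10 + [True]))
```

The sawtooth pattern this model produces (linear growth punctuated by halvings) is the characteristic AIMD behavior that, per the references above, yields fair convergence among competing flows.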
Three issues with TCP's flow control mechanism in current TCP/IP implementations are as follows:
An application using TCP as the transport mechanism has no ability to specify a flow rate or to control the packet drop rate—an application must accept the flow rate and packet drop behavior that TCP and the underlying network deliver. TCP has limited control over absolute throughput and packet drop rates—these are dependent on the traffic behavior and configuration of the underlying packet network. Without per flow guaranteed throughput, a system for link bandwidth provisioning among multiple TCP flows cannot be implemented in practice;
If the network also carries packet data traffic that does not follow TCP's congestion control and avoidance strategies, i.e., it is not TCP-friendly, then TCP flows may experience starvation, formally known as congestion collapse, in which flow rate is reduced to unacceptably low values. See S. Floyd, “Congestion Control Principles”, RFC 2914, September 2000, http://www.ietf.org/rfc/rfc2914.txt, incorporated fully herein by reference. This susceptibility to starvation is exploited by processes that conduct throughput Denial-of-Service (DoS) attacks in which network links, switches, and/or routers are flooded with packet flows that do not use TCP-friendly congestion control mechanisms, such as UDP flows. Congestion collapse may also occur for some TCP flows in a pure TCP environment if those flows have relatively large round-trip times and/or if they experience multiple congested gateways. See S. Floyd, “Connections with Multiple Congested Gateways in Packet-Switched Networks Part 1: One-way Traffic”, Computer Communication Review, Vol. 21, No. 2, April 1991;
Conversely, if TCP flows share common resources (such as transmission links with ingress queues) with UDP flows, the flow control behavior of TCP may cause packet loss in the UDP flows. Because many UDP-based applications are sensitive to packet loss, TCP potentially degrades the quality and performance of the UDP-based applications. We refer to the network or flow state in which loss-sensitive applications experience unacceptable levels of packet loss as quality collapse.
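The starvation effect described above can be illustrated with a simple model of an AIMD-controlled flow sharing a fixed-capacity link with a constant-rate flow that ignores congestion signals (as a UDP flood does). The link capacity, rate units, and function names below are assumptions of this sketch only:

```python
# Illustrative model of starvation: an AIMD flow shares a link of fixed
# capacity with an unresponsive constant-rate flow. Whenever aggregate
# demand exceeds capacity, a drop is signaled and the AIMD flow halves
# its rate; otherwise it increases additively. All values hypothetical.

def share_link(capacity, udp_rate, rtts=1000):
    """Return the AIMD flow's rate after `rtts` round trips."""
    rate = 1.0
    for _ in range(rtts):
        if rate + udp_rate > capacity:   # overload -> drop signaled
            rate = max(1.0, rate / 2.0)  # multiplicative decrease
        else:
            rate += 1.0                  # additive increase
    return rate

# With a 100-unit link, the AIMD flow alone oscillates between roughly
# half and full capacity; against a 90-unit unresponsive flow it is
# squeezed into the few units of leftover capacity.
print(share_link(100, udp_rate=0))
print(share_link(100, udp_rate=90))
```

The unresponsive flow never backs off, so the AIMD flow alone absorbs every congestion signal—the asymmetry behind both the congestion-collapse and quality-collapse states discussed above.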
For some mission-critical applications, the three issues noted above are unacceptable costs for TCP's transport-level reliability. A typical workaround is to build a separate network that is dedicated to supporting only the mission-critical applications and that is highly over-provisioned, in the sense that the network's traffic-bearing capacity is much larger than the average traffic volume. Another typical workaround for quality collapse is to use packet prioritization techniques, but such techniques have limited effectiveness, do not scale, and produce network traffic behavior that is difficult to analyze and predict.
Such workarounds have financial costs and still cannot provide deterministic throughput/goodput guarantees or packet drop behaviors. There are other schemes to mitigate the problem, such as allocating multiple TCP connections for a single flow or installing ACK proxy systems (see, e.g., R. Packer, “Method for Explicit Data Rate Control in a Packet Communication Environment without Data Rate Supervision”, U.S. patent application Publication No. 20020031088 incorporated fully herein by reference), but such approaches do not provide hard guarantees and either require changes to the TCP stack itself or require additional transport-layer (Layer 4) logic.
A solution which provides deterministic flow rates and packet drop behavior for individual flows without expensive overprovisioning may have a major impact on network design planning, operations, management, and administration, as well as networked application system design, e.g., the design of a storage networking system or grid computing system. Furthermore, a solution that does not require any changes to TCP or the transport layer would be compatible with the entire existing installed base of TCP-based networked applications.
When TCP flows are transported over a scheduled network (SN) using the methods of this invention, we will refer to the system as TCP/SN. Several classes of networked applications and services are either directly impacted by TCP/SN or even created by it:
VPN Class: Virtual Private Networks (VPNs) are usually implemented by some form of packet encapsulation. Examples include IPSec-based VPNs and MPLS-based VPNs. Such approaches logically isolate one VPN customer's packets from another VPN customer's packets, but the packets are not physically isolated within the network. Hence, networks supporting multiple VPNs need to be heavily engineered to minimize the impact of any one VPN's traffic on the performance of the other VPNs. If TCP/SN is used as the basis for a VPN service, then not only can encapsulation overhead and associated interoperability issues be avoided but also the traffic in each VPN will not affect the performance of any other VPN at all.
Reliable Multicast Class: It is widely believed that TCP cannot be used as the basis for a reliable multicast protocol, essentially because of limitations resulting from TCP's original design as a unicast protocol. The main problem is that each branch of the multicast tree experiences different congestion processes but the TCP source congestion control mechanism must respond in aggregate, i.e., if a packet drop occurs on branch A of the multicast tree but not on branch B, the TCP sender still needs to reduce throughput across branch B as well as branch A and resend the packet across both branches. When network congestion is at all significant, a TCP-based reliable multicast protocol will experience congestion collapse. Because TCP/SN guarantees no packet drops from packet congestion, the congestion collapse problem is obviated, and a TCP-based reliable multicast can be implemented, which also obviates the need to develop a new reliable protocol for multicast applications.
Throughput (D)DoS Immunity Class: TCP/SN is completely immune to the class of throughput Denial-of-Service (DoS) or Distributed Denial-of-Service (DDoS) attacks in which an attacker floods the network with packets. Packet flooding causes TCP flows to reduce their throughput to unacceptably low levels. Because a TCP/SN flow does not contend for network resources with any other traffic, it will not experience any throughput degradation during a throughput (D)DoS attack.
Storage Networking Class: To meet requirements for reliability, backup, and restoration of data, organizations implement data storage networks in which data files are stored and retrieved from remote storage devices through packet networks. Some of the associated applications and protocols, such as synchronous replication, distributed RAID, and iSCSI, could benefit directly from the use of TCP/SN.
Information Logistics Class: Information logistics is an emerging class of applications in which shared packet networks will be used to transport (large) blocks of information with requirements that the information transmission be initiated at precise times and finish on or before a precise deadline (“right information, right time”). An example of an application in this class is grid computing. Because TCP is non-deterministic with respect to throughput in a shared network resource environment, only through extensive overprovisioning can one expect the requirements to be met. In contrast, with TCP/SN only the necessary amount of network resources need be allocated to each task to provide a hard guarantee that the requirement will be met.