Due to limited networking resources in many of today's Internet Protocol (IP) networks, sporadic and periodically sustained congestion occurs. In addition, as the number of competing flows increases, the ability of Transmission Control Protocol (TCP) flows to share a bottleneck link fairly and efficiently decreases. High packet losses experienced by TCP flows also cause long and unpredictable delays as a result of TCP timeouts. Most congestion control mechanisms therefore strive to maintain high network utilization while avoiding network overload, thereby avoiding high queuing delays and packet loss.
IP provides a high degree of flexibility in building large and arbitrarily complex networks. The ubiquitous, multi-service, connectionless, cross-platform nature of IP has contributed to the success of the Internet. A recent rise in usage and popularity of IP networks (e.g., the Internet) has been paralleled by a rise in user expectations regarding the quality of services offered by these networks. Unfortunately, due to limited networking resources (e.g., bandwidth, buffer space, etc.) in many networks, sporadic and periodically sustained congestion is inevitable.
Consequently, service providers need not only to evolve their networks to higher speeds, but also to plan for new services and mechanisms that address the varied requirements of different customers. At the same time, service providers would like to maximize sharing of costly network infrastructure by controlling the usage of network resources in accordance with service pricing and revenue potential. Rapidly rising bandwidth demand and a growing need for service quality have driven efforts to define mechanisms for efficient network control and service delivery.
A major part of the traffic transported in today's Internet is elastic traffic, particularly traffic from TCP applications. TCP flows are connection-oriented in nature and elastic in resource requirements. Elasticity stems from TCP's ability to utilize a network bottleneck, adapting quickly to changes in offered load or available bandwidth. However, TCP's ability to share a bottleneck link fairly and efficiently decreases as the number of flows increases. For example, TCP performance degrades significantly when the number of active TCP flows exceeds a network's bandwidth-delay product measured in packets. In this case, congestion occurs due to contention for limited network resources (e.g., bandwidth, buffer space, etc.). If this situation is not detected and prevented, congestion collapse may occur, in which a network is loaded to such a level that data goodput (e.g., which may be defined as throughput minus retransmissions) falls to almost zero.
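The bandwidth-delay product mentioned above can be computed directly. The following is a minimal sketch, where the link speed, round-trip time, and packet size are hypothetical example values, not figures from the original text:

```python
def bdp_packets(bandwidth_bps: float, rtt_s: float, packet_bytes: int) -> float:
    """Bandwidth-delay product of a path, expressed in packets.

    This is the approximate number of packets the path can hold
    'in flight'; performance tends to degrade once the number of
    active TCP flows exceeds this figure.
    """
    return (bandwidth_bps * rtt_s) / (packet_bytes * 8)

# Example (assumed values): a 10 Mb/s bottleneck with a 100 ms RTT
# and 1500-byte packets holds roughly 83 packets in flight.
capacity = bdp_packets(10e6, 0.100, 1500)
```

With these example values, well over 83 simultaneous flows would leave some flows with less than one packet in flight per round trip, which is where the degradation described above sets in.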
A large number of flows may lead to high network utilization, but high utilization is beneficial only when the packet loss rate is low. This is because high packet loss rates can negatively impact overall network and end-user performance. For example, a lost packet consumes network resources before it is dropped, thereby impacting efficiency in other parts of a network. A high packet loss rate also causes long and unpredictable delays as a result of TCP timeouts. It is therefore desirable to achieve high network utilization with low packet loss rates.
Although current TCP end-system control mechanisms may address network congestion, a TCP flow may still achieve near zero goodput when a large number of flows share a bottleneck link. Also, with a network heavily loaded with a large number of flows, current network-based control mechanisms may reach their limits of useful intervention to prevent excessive packet loss or even congestion collapse.
These reasons, among others, suggest the need for controlling the number of TCP flows in a network. Mechanisms for overload control in IP networks may involve per-connection TCP admission control. Admission control for TCP flows may be achieved without changing the end-system protocol stacks, either by intercepting (e.g., dropping) TCP connection setup (SYN) packets in the network or by sending artificial TCP connection reset (RST) packets to end systems. The RST-based approach has the disadvantage of potentially faster application-level retries, for example.
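The SYN-interception approach described above can be sketched as follows. This is a simplified illustration, not the patent's actual mechanism: the flow limit and the class interface are hypothetical, and a real implementation would sit in a router's forwarding path rather than in application code:

```python
class TcpAdmissionController:
    """Sketch of per-connection TCP admission control via SYN interception.

    When the network is judged overloaded, new SYN packets are dropped
    rather than forwarded. The sender's own retransmission with
    exponential backoff then throttles connection attempts, with no
    change to end-system protocol stacks and no artificial RST packets.
    """

    def __init__(self, max_flows: int):
        self.max_flows = max_flows   # assumed static limit for illustration
        self.active_flows = 0

    def on_syn(self) -> bool:
        """Return True to forward the SYN (admit), False to drop it."""
        if self.active_flows >= self.max_flows:
            return False             # drop: sender will retry with backoff
        self.active_flows += 1
        return True

    def on_flow_end(self) -> None:
        """Called when a connection closes or times out."""
        self.active_flows = max(0, self.active_flows - 1)
```

Dropping the SYN is gentler than sending an RST because the end host's retry timer spaces out subsequent attempts, whereas an RST surfaces an immediate error that applications may retry right away.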
Admission control, in general, checks whether admitting a flow would reduce the service quality of existing flows, or whether an incoming flow's quality of service (QoS) requirements cannot be met. Admission control may play a crucial role in ensuring that a network meets a user's QoS requirements.
Overall network user utility may be increased by increasing network capacity (by increasing switch, router, and link capacities, for example), or by implementing an intelligent traffic management mechanism (e.g., admission control, active queue management (AQM)). Another option may involve over-provisioning a network so that under normal conditions the network is rarely overloaded. However, when there is a focused overload on part of a network (e.g., when a popular web site is heavily accessed, or some event not accounted for in traffic engineering occurs), network devices (e.g., switches, routers) must have mechanisms to control resource usage. When such events happen, there are not enough resources available to give reasonable service to all users. Over-provisioning network bandwidth and keeping the network lightly loaded in order to support adequate service quality is not a cost-effective solution and cannot be achieved at all times.
In addition, a popular web site may be flooded with web browser hits due to a promotional, sporting, or other “news-breaking” event. In such cases, either users must accept significant service degradation or the service provider must increase investment in over-provisioned bandwidth. This gives rise to a key question of who would pay for the increased capacity associated with over-provisioning. As e-commerce becomes a highly competitive market, over-provisioning in place of admission control may not be cost-effective for service providers. The service providers who gain performance and reliability while keeping costs down are the ones who gain a competitive edge in the marketplace.
Current schemes propose the use of various forms of on-line estimates of the number of active flows or the bandwidth of flows to make admission control decisions. In particular, some current schemes use the total transmission bandwidth, an estimated aggregate arrival rate, and a buffer size to determine when the admission of a new flow is likely to cause a target overflow probability to be exceeded. However, these schemes do not provide a tight integration of AQM into TCP admission control, thereby resulting in inefficient control of TCP flows. Incorporating AQM into TCP admission control provides the advantage of throttling the rate of TCP connection setups, in addition to the ability to detect incipient congestion early and convey this information to the end systems. AQM schemes drop incoming packets in a random probabilistic manner, where the probability is a function of recent buffer fill. An objective is to provide a more equitable distribution of packet loss, avoid the synchronization of flows, and at the same time improve the utilization of the network. The drop probabilities reflect actual network behavior and provide simple, measurable, and controllable quantities for admission control. The use of on-line measurements and estimation derived from AQM for admission control is appealing because of its simplicity and its ability to deal with sources that cannot be characterized. It allows admission control to be built on top of AQM. Such a scheme does not require additional measurements or accounting beyond those needed for AQM.
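The probabilistic dropping described above is exemplified by RED (Random Early Detection)-style AQM, where the drop probability grows with a smoothed measure of buffer fill. The sketch below is an illustrative RED-style calculation, not the patent's scheme; the thresholds, maximum probability, and averaging weight are all assumed example values:

```python
class RedStyleAqm:
    """RED-style AQM sketch: drop probability as a function of buffer fill.

    The average queue size is tracked with an exponentially weighted
    moving average (EWMA). Between min_th and max_th the drop
    probability rises linearly from 0 to max_p; above max_th every
    arriving packet is dropped. All parameters are example values.
    """

    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
        self.min_th = min_th
        self.max_th = max_th
        self.max_p = max_p
        self.weight = weight     # EWMA weight for the average queue size
        self.avg = 0.0

    def drop_probability(self, queue_len: int) -> float:
        """Update the average fill and return the current drop probability."""
        self.avg += self.weight * (queue_len - self.avg)
        if self.avg < self.min_th:
            return 0.0
        if self.avg >= self.max_th:
            return 1.0
        return self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
```

Because this drop probability is already computed per packet, an admission controller built on top of AQM could, for instance, stop admitting new TCP flows whenever the probability exceeds some threshold, with no extra measurement machinery.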
In view of the foregoing, it would be desirable to provide a technique for implementing an admission control scheme for TCP flows which overcomes the above-described inadequacies and shortcomings of current schemes. More particularly, it would be desirable to provide a technique for implementing an admission control scheme for TCP flows that efficiently integrates AQM in a cost-effective manner.