ATM-based broadband networks will support diverse applications ranging from voice and circuit emulation to variable bit rate video and data, each with its own traffic characteristics. This diversity of traffic poses new challenges for bandwidth allocation and traffic control within the network.
The resource (bandwidth) allocation problems in ATM networks are very different from those in circuit switched or existing packet switched networks. In existing telephone networks, for example, new connection requests are blocked on the basis of a shortage of trunks. In an ATM node this is not the case, since the physical resources are allocated virtually and shared by many connections. In other words, all connections are statistically multiplexed onto the same link, and yet each connection expects the network to meet certain performance requirements. The problem of allocating appropriate bandwidth to each connection then becomes a crucial one. Routes are typically selected and connections accepted so as to "optimize" some measure of resource utilization while providing adequate QOS to the carried traffic. This requires knowledge of both the current traffic conditions and the impact of adding a new connection.
The resource allocation procedure, henceforth referred to as the connection admission control (CAC) algorithm, uses the connection traffic descriptors (e.g., peak rate; mean rate, also referred to as average rate or sustainable bit rate; and maximum burst size) along with the desired QOS parameters (e.g., cell loss, cell delay and cell delay variation) to assess the amount of bandwidth required by the connection. The decision to accept or reject a connection is then based on the amount of available bandwidth on the outgoing link, in addition to any other parameters which the network administrator may deem necessary to consider.
It should also be noted that although the connection admission control algorithm deals with the cell level performance of a connection, it impacts the call level performance as well, via network dimensioning and routing. Both network dimensioning and routing need estimates of the bandwidth required by typical connections, to determine facility requirements and to select appropriate routes, respectively.
Generally speaking, in an ATM network a traffic contract is negotiated between the user and the network at each connection setup. The user supplies the traffic characteristics (descriptors), the desired QOS, and the destination address to the network controller. The ATM network controller passes the connection request on to the CAC algorithm. The CAC algorithm then determines whether there is enough free bandwidth available on each hop of the source-destination route to accept the connection.
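The connection setup flow just described can be sketched as follows. This is a minimal illustration, not the CAC of the invention: the link names, the table-based bookkeeping, and the placeholder bandwidth estimator (which simply reserves the declared peak rate) are all assumptions made for the example.

```python
def required_bandwidth(descriptor):
    """Placeholder estimator: a real CAC derives this figure from the
    traffic descriptors and QOS targets (e.g., an equivalent capacity).
    Here we conservatively reserve the declared peak rate."""
    return descriptor["peak_rate"]

def admit(route_links, allocated, capacity, descriptor):
    """Accept the connection only if every hop of the source-destination
    route has enough free bandwidth; on accept, reserve it on each hop."""
    need = required_bandwidth(descriptor)
    if any(allocated[link] + need > capacity[link] for link in route_links):
        return False                      # reject: some hop lacks free bandwidth
    for link in route_links:              # accept: reserve on every hop
        allocated[link] += need
    return True
```

A real CAC would replace `required_bandwidth` with an equivalent-capacity style estimate; the per-hop check and reservation structure, however, is the part this sketch is meant to show.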
A connection can belong to one of four service categories defined by industry-wide association bodies such as the ATM FORUM. These service categories are: (1) constant bit rate (CBR) service; (2) variable bit rate (VBR) service; (3) available bit rate (ABR) service; and (4) unspecified bit rate (UBR) service. The ATM FORUM defines these services as follows:
The CBR service is for applications and services which have very stringent cell loss, delay and delay variation requirements.

The VBR service is for applications and services which have less stringent cell loss, delay and delay variation requirements than the applications which use the CBR service.

The ABR service is currently being defined by the ATM FORUM. This service is meant primarily for data applications such as LAN-to-LAN interconnections.

The UBR service is primarily for data applications. This service has no guaranteed quality of service associated with it. However, the QOS for the UBR service is engineered to meet certain (target) objectives.

The exact problem of bandwidth allocation can be modeled as a ΣG/D/1/K queuing model. However, the solution to the exact problem is too complicated to meet the real-time requirements of a bandwidth allocation algorithm. Therefore, suitable approximations must be made. One such approximation model is the On/Off fluid flow process.

In "Effective bandwidths for the multi-type UAS channel" by R. J. Gibbens and P. J. Hunt, Queueing Systems (1991), pages 17-28, the uniform arrival and service (UAS) model is used to study traffic offered to a multi-service communication channel. As shown in FIG. 1, a plurality of sources i=(1, . . . , N) 10 are multiplexed at a multiplexer 12 onto an outgoing link 14. The traffic from each source is assumed to follow an On/Off pattern in which the source generates cells at a constant rate γ for a period of time t_1 and is silent for a period of time t_2. The multiplexer 12 has a buffer 16. The rate γ is constant, but both t_1 and t_2 are random variables. The On and Off periods are usually assumed to be exponentially distributed. According to Gibbens and Hunt, the effective bandwidth of the i-th connection can be approximated as follows:

c_i = [γ_i ζ + λ_i + μ_i - sqrt((γ_i ζ + λ_i + μ_i)² - 4 λ_i γ_i ζ)] / (2ζ)   (1)

where T_1i and T_2i are the mean values of the On and Off periods respectively of the i-th connection, i.e. T_1i = ⟨t_1i⟩ and T_2i = ⟨t_2i⟩; μ_i = 1/⟨t_1i⟩ and λ_i = 1/⟨t_2i⟩; ζ = ln(δ)/L < 0, where δ is the cell loss probability and L is the buffer size (expressed in terms of the number of cells it can hold).

In equation (1):

(a) ζ = ln(δ)/L = 0 (small δ and infinite buffer size, i.e. L → ∞) produces the mean bandwidth r_i for connection "i", which is given as

r_i = γ_i T_1i/(T_1i + T_2i) = γ_i λ_i/(λ_i + μ_i)

(b) ζ = ln(δ)/L = -∞ (no buffer) produces the peak bandwidth γ_i for connection "i".

CACs according to known schemes use aggregates of either the peak bandwidth or the mean bandwidth, such as Σγ_i or Σr_i, as the criterion for accepting or rejecting the requested call.

An article entitled "Equivalent Capacity and Its Application to Bandwidth Allocation in High-Speed Networks" by R. Guerin, H. Ahmadi and M. Naghshineh, IEEE Journal on Selected Areas in Communications, Vol. 9, No. 7, Sep. 1991, pages 968-981, also describes in detail CACs based on the fluid flow model and on the stationary bit rate approach using a Gaussian approximation. The article mentions that:

Because all connections are statistically multiplexed at the physical layer and the bit rate of connections varies, a challenging problem is to characterize, as a function of the desired grade of service, the effective bandwidth requirement of both individual connections and the aggregate bandwidth usage of connections multiplexed on a given link. This information is provided by accounting (on each link) for the amount of bandwidth currently allocated to accommodate existing connections, and by identifying how much additional bandwidth needs to be reserved on links over which a new connection is to be routed. Because of the statistical multiplexing of connections and shared buffering points in the network, both the accounting and reservation are based on some aggregate statistical measures matching the overall traffic demand rather than on physically dedicated bandwidth or buffer space per connection. In addition to the inherent complexity of such a matching, another major challenge is to provide these traffic control functions in real-time, upon the arrival of a connection request. The corresponding procedures must, therefore, be computationally simple enough so their overall complexity is consistent with real-time requirements.

The article then reports that:

we propose a computationally simple approximation for the equivalent capacity or bandwidth requirement of a single or multiplexed connections on the basis of their statistical characteristics. When connections are statistically multiplexed, their aggregate statistical behaviour differs from their individual statistical representation. One needs, therefore, to define new metrics to represent the effective bandwidth requirement of an individual connection as well as the total effective bandwidth requirement of connections multiplexed on each link. The purpose of the equivalent capacity expression is to provide a unified metric to represent the effective bandwidth of a connection as well as the effective aggregated load on network links at any given time. These link metrics can then be used for efficient bandwidth management, routing, and call control procedures.
Guerin et al. considered two approximations: the fluid flow approximation and the stationary approximation. The fluid flow approximation is substantially the same as the one discussed by Gibbens and Hunt referenced above. This approximation produces the effective bandwidth for connection "i" as c_i.
The stationary approximation results in the equation

C_S = m + α′σ, with α′ = sqrt(-2 ln(ε) - ln(2π))

where m is the mean aggregate bit rate, σ is the standard deviation of the aggregate bit rate, and ε is the buffer overflow probability.
Finally, the article states that the equivalent capacity C is taken to be the minimum of the two approximations:

C = min( m + α′σ, Σ_{i=1..N} c_i )

where N is the number of multiplexed connections.
Guerin et al. use this equivalent capacity as the criterion for accepting or rejecting the requested call.
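The equivalent capacity criterion reported above can be sketched as follows. This is a hedged illustration, not the invention's CAC: it assumes exponentially distributed On/Off periods, the fluid-flow effective bandwidth in the ζ = ln(δ)/L notation of equation (1), the Gaussian stationary form m + α′σ with α′ = sqrt(-2 ln ε - ln 2π), a common target δ = ε, and illustrative function names and units throughout.

```python
import math

def effective_bandwidth(peak, t_on, t_off, delta, buf_cells):
    """Fluid-flow effective bandwidth c_i of one On/Off source, using
    zeta = ln(delta)/L < 0, mu = 1/t_on (On->Off), lam = 1/t_off (Off->On)."""
    zeta = math.log(delta) / buf_cells          # negative for delta < 1
    mu, lam = 1.0 / t_on, 1.0 / t_off
    a = peak * zeta + lam + mu
    return (a - math.sqrt(a * a - 4.0 * lam * peak * zeta)) / (2.0 * zeta)

def equivalent_capacity(sources, delta, buf_cells):
    """C = min( m + a'*sigma, sum of per-connection c_i ).
    Each source is a (peak, t_on, t_off) tuple; rates in Mb/s, times in s."""
    # Stationary (Gaussian) approximation on the aggregate bit rate:
    # an On/Off source emits at `peak` with probability t_on/(t_on+t_off),
    # giving mean m_i and variance m_i*(peak - m_i).
    means = [p * t1 / (t1 + t2) for p, t1, t2 in sources]
    variances = [m * (p - m) for (p, _, _), m in zip(sources, means)]
    m_agg, sigma = sum(means), math.sqrt(sum(variances))
    alpha = math.sqrt(-2.0 * math.log(delta) - math.log(2.0 * math.pi))
    stationary = m_agg + alpha * sigma
    # Fluid-flow approximation: sum of individual effective bandwidths.
    fluid = sum(effective_bandwidth(p, t1, t2, delta, buf_cells)
                for p, t1, t2 in sources)
    return min(stationary, fluid)
```

Note how the sketch reproduces the limits stated for equation (1): with a large buffer, `effective_bandwidth` approaches the mean rate, and with a one-cell buffer it approaches the peak rate, so the resulting allocation always lies between pure mean-rate and pure peak-rate assignment.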
As discussed above, constant bit rate connections are ideally characterized by periodic cell arrivals at the switch or multiplexer. In reality, however, due to buffering in the customer premises equipment (CPE) and in the upstream switches, the traffic characteristics of CBR connections change from a well behaved periodic (100% correlated) cell stream to a stochastic (or less correlated) cell stream whose cell inter-arrival times are distributed according to the delay experienced by the cells in the queues. This variation in the inter-arrival time causes the observed peak rate of the connection to momentarily increase or decrease around the application's peak rate, resulting in an increase in the buffer and bandwidth requirements. This increase depends on the absolute value and distribution of the cell delay variation (CDV).
Therefore, the amount of bandwidth allocated to CBR connections is a function of: (1) the peak rate of the connection; (2) the CDV value and distribution; (3) the available buffering; and (4) the QOS requirements of the connections. Hence the admission control schemes discussed above will not work in the many cases where CDV and available buffering must be taken into consideration.
The known approximation schemes such as those discussed above have not addressed these effects, and the invention improves the calculation of required bandwidth by taking into account cell delay variation (CDV) and available buffering.
The applicant's copending application Ser. No. 08/709,455, filed Sep. 5, 1996 and incorporated herein, describes a new CAC for CBR services in which the CDV impact is considered. According to one embodiment of the invention described therein, when a new CBR connection request is received, the CAC process for CBR takes the following inputs for the requested connection: (1) connection traffic parameters (peak rate, cell delay variation tolerance); (2) QOS values (cell loss ratio, cell delay and cell delay variation); (3) buffer size; (4) input link rate; and (5) output link rate. Once the inputs are received, the required bandwidth is calculated using the algorithm described therein and is then compared against the available bandwidth to determine whether the connection can be accepted or not.
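The shape of that CBR admission step can be outlined as below. The actual required-bandwidth calculation is the one described in application Ser. No. 08/709,455 and is not reproduced here; the CDV-dependent inflation factor in this sketch is purely a stand-in, and every name is hypothetical.

```python
# Skeleton of the CBR CAC inputs and decision; the real calculation is the
# algorithm of Ser. No. 08/709,455, not the stand-in inflation model below.

def cbr_required_bandwidth(peak_rate, cdvt, qos, buffer_cells,
                           input_link_rate, output_link_rate):
    """Hypothetical placeholder: inflate the peak rate by a CDV-dependent
    factor, bounded by the input link rate (a cell stream cannot arrive
    faster than the link it comes in on). A full calculation would also
    use the QOS values and the output link rate."""
    inflation = 1.0 + cdvt / max(buffer_cells, 1)   # stand-in model only
    return min(peak_rate * inflation, input_link_rate)

def cbr_admit(request, available_bandwidth):
    """Accept only if the required bandwidth fits in what is available."""
    need = cbr_required_bandwidth(request["peak_rate"], request["cdvt"],
                                  request["qos"], request["buffer_cells"],
                                  request["input_link_rate"],
                                  request["output_link_rate"])
    return need <= available_bandwidth
```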
In a system which serves two different types of traffic, CBR and VBR services, CBR services are given higher priority over VBR services. Therefore, in the system shown in FIG. 2, two buffers 20 and 22 are provided for the two types of traffic, high priority and low priority, which are multiplexed onto an output link 24. When a CBR connection is requested, the required bandwidth for the requested connection is calculated as described above and the decision to accept or reject is made by the CAC. When a VBR connection is requested, the CAC calculates the required VBR bandwidth for the requested connection. Because the VBR traffic is served only with the bandwidth left over by the higher priority CBR traffic, the required VBR bandwidth must be modified according to the expected utilization of the CBR traffic. In other words, the high priority buffer cannot be shared by the low priority traffic, but the low priority buffer may be shared by the higher priority traffic.
The output link capacity shared by the two queues is denoted by "κ". Let "ρ_h" and "ρ_l" be the utilizations of the high and the low priority queues respectively. The congestion in a multiplexer can be attributed to two main phenomena: (1) cell level congestion and (2) burst level congestion. Cell level congestion occurs when the buffers in the queues fail to accommodate simultaneous cell arrivals from different sources, whereas burst level congestion occurs when the buffers fail to accommodate simultaneous burst arrivals from sources. If the number of buffer positions available in the multiplexer is greater than N, the number of existing connections, then cell level congestion is eliminated. Since the available buffering in any ATM switch far exceeds the number of possible connections that can be supported, the following discussion is limited to the problem of burst level congestion.
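The strict-priority service of the two queues of FIG. 2, under which the VBR traffic sees only the capacity left unused by the CBR traffic, can be illustrated with a per-slot scheduler sketch; the queue contents and names here are illustrative only.

```python
from collections import deque

def serve_one_slot(high_q, low_q):
    """Transmit one cell per link slot: the high priority (CBR) queue is
    always drained first, so low priority (VBR) cells wait for leftovers."""
    if high_q:
        return ("high", high_q.popleft())
    if low_q:
        return ("low", low_q.popleft())
    return None  # idle slot

high, low = deque(["c1", "c2"]), deque(["v1"])
order = [serve_one_slot(high, low) for _ in range(3)]
# both CBR cells drain before the single VBR cell is served
```

This per-slot rule is why the expected CBR utilization ρ_h must be folded into the VBR bandwidth calculation: the low priority queue's effective service rate is the residual κ(1 - ρ_h), not the full link rate κ.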