ATM is a networking standard designed to provide simultaneous support for voice, video, and data traffic. An ATM network is packet-switched, but supports only one packet size: a 53-byte packet called a cell. Regardless of the type of information it carries, each ATM cell consists of a five-byte header and a 48-byte payload.
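The fixed cell format can be made concrete with a minimal sketch. The helper name and the zero-padding of short payloads below are illustrative assumptions; actual ATM adaptation layers define their own segmentation and padding rules.

```python
HEADER_BYTES = 5
PAYLOAD_BYTES = 48
CELL_BYTES = HEADER_BYTES + PAYLOAD_BYTES  # every ATM cell is exactly 53 bytes

def make_cell(header: bytes, payload: bytes) -> bytes:
    """Assemble one fixed-size ATM cell from a 5-byte header and up to 48 payload bytes."""
    if len(header) != HEADER_BYTES:
        raise ValueError("ATM cell header must be exactly 5 bytes")
    if len(payload) > PAYLOAD_BYTES:
        raise ValueError("payload cannot exceed 48 bytes per cell")
    # Pad short payloads with zeros so the cell is always 53 bytes long.
    return header + payload.ljust(PAYLOAD_BYTES, b"\x00")
```

Because every cell is the same size, switches can make forwarding and scheduling decisions with fixed, predictable per-cell processing cost.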
ATM is connection oriented. That is, two systems must set up an end-to-end “connection” over the network before they can communicate. But the connection does not require a dedicated circuit like a traditional telephone network connection; instead, the connection is merely a grant of permission to transmit cells at a negotiated data rate, with some guarantees as to quality-of-service (QoS) parameters such as minimum cell rate, average cell rate, and network delay. The term commonly used for an ATM connection is a Virtual Channel or “VC”.
ATM defines several service classes, each designed to meet the needs of particular types of information sources. The Constant Bit Rate (CBR) service class is most appropriate for sources with a known, constant transmission rate, such as traditional PCM-sampled telephone signals. The Variable Bit Rate (VBR) service class allows some variation in transmission rate but still provides bandwidth guarantees, and is appropriate for digital video (e.g., MPEG-coded or H.26× video) and similar applications. The Available Bit Rate (ABR) service class is appropriate for most data transmission. ATM switches monitor their excess capacity (the part not being used by service classes with guaranteed rates) and allocate that capacity to their ABR connections. In return, each ABR source is required to control its rate as directed by the switches in its connection path. Finally, the Unspecified Bit Rate (UBR) service class is also available for data transmission. UBR traffic has no guarantees as to cell loss rate or delay, but places few constraints on the behavior of sources.
ABR and UBR traffic can be regarded as “best-effort” traffic. That is, CBR and VBR traffic take precedence because of their QoS guarantees, and ATM switches schedule ABR and UBR traffic around their CBR and VBR traffic. To give best-effort traffic sources an incentive to use ABR connections, ATM switches attempt to divide their unreserved capacity fairly and efficiently among all competing ABR sources.
The “ERICA” and “ERICA+” switch congestion-avoidance algorithms, disclosed by R. Jain et al. in U.S. Pat. No. 5,805,577, represent a state-of-the-art approach to controlling ABR traffic. These algorithms measure switch utilization over “averaging intervals”, including counting the number of sources that used the switch during each interval. At the end of each interval, the ABR capacity available for the next interval is computed. A “fair share” of that capacity is then determined by dividing it by the number of sources that were active over the preceding interval.
An overload factor is also calculated to represent the current overall switch load as a percentage. An explicit rate is then assigned to each source for use during the next measurement interval, based on its current rate, as:

    Explicit Rate = max(Fair Share, Current Rate / Overload Factor)
This explicit rate is communicated to its corresponding source.
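The per-interval computation described above can be sketched as follows. This is only an illustration of the fair-share and explicit-rate rules as stated in the text, not the patented implementation; the function and variable names are assumptions, and the overload factor is expressed as a ratio rather than a percentage.

```python
def erica_explicit_rates(abr_capacity, active_sources, measured_input_rate, current_rates):
    """Sketch of one ERICA-style measurement-interval update.

    abr_capacity        -- ABR capacity available for the next interval (cells/s)
    active_sources      -- number of sources active during the preceding interval
    measured_input_rate -- total ABR input rate measured over that interval (cells/s)
    current_rates       -- mapping of source id -> current cell rate (cells/s)
    Returns a mapping of source id -> explicit rate for the next interval.
    """
    # Fair share: divide the available ABR capacity among the active sources.
    fair_share = abr_capacity / active_sources
    # Overload factor: overall switch load relative to available ABR capacity
    # (> 1.0 means the switch is overloaded, < 1.0 means underloaded).
    overload = measured_input_rate / abr_capacity
    # Explicit Rate = max(Fair Share, Current Rate / Overload Factor).
    return {src: max(fair_share, rate / overload)
            for src, rate in current_rates.items()}
```

For example, with 100 cells/s of ABR capacity, four active sources, and a measured input of 200 cells/s (overload factor 2.0), a source sending at 80 cells/s is told to halve its rate to 40, while a source sending at 20 is raised to the 25 cells/s fair share. Scaling each source by the same overload factor drives the total load toward capacity, while the fair-share floor keeps low-rate sources from being starved.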