1. Field of the Invention
The present invention relates broadly to the field of telecommunications. More particularly, the present invention relates to the acceptance or rejection of a proposed connection through an asynchronous transfer mode (ATM) switch or node based on available bandwidth and the precedence assigned to the proposed connection.
2. State of the Art
Perhaps the most awaited, and now fastest growing, technology in the field of telecommunications in the 1990's is Asynchronous Transfer Mode (ATM) technology. ATM provides a mechanism for removing the performance limitations of local area networks (LANs) and wide area networks (WANs), and for providing data transfers at speeds on the order of gigabits per second. The variable length packets of LAN and WAN data are being replaced with ATM cells, which are relatively short, fixed length packets. Because ATM cells can carry voice, video, and data across a single backbone network, ATM technology provides a unitary mechanism for high speed end-to-end telecommunications traffic.
Because the data contained in ATM cells can be generated from either generally fixed rate communications or bursty communications, it will be appreciated that traffic accommodation mechanisms have been introduced in order to avoid situations where ATM switches or nodes are overtaxed, resulting in the loss of cells. In particular, buffering and leaky-bucket usage regulating mechanisms are well known. In addition, it is known in the art that an ATM switch or node will first determine whether it has the capacity to handle a proposed call before accepting the call. This is achieved through connection admission control (CAC).
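The leaky-bucket usage regulating mechanism mentioned above can be sketched as follows. This is a minimal illustrative model, not a reproduction of any particular implementation; the class name, the parameters, and the drain-rate units are assumptions made for the example.

```python
class LeakyBucket:
    """Illustrative leaky-bucket regulator: a cell is admitted only if the
    bucket has room, and the bucket drains at a fixed rate, so bursty
    arrivals are smoothed toward the contracted rate."""

    def __init__(self, depth, drain_rate):
        self.depth = depth            # bucket capacity, in cells (assumed units)
        self.drain_rate = drain_rate  # cells drained per time unit (assumed units)
        self.level = 0.0
        self.last_time = 0.0

    def offer_cell(self, now):
        # Drain the bucket for the interval elapsed since the last arrival.
        self.level = max(0.0, self.level - (now - self.last_time) * self.drain_rate)
        self.last_time = now
        if self.level + 1 <= self.depth:
            self.level += 1
            return True   # cell conforms to the contract; forward it
        return False      # cell violates the contract; drop or tag it
```

With a depth of two cells, for example, a burst of three simultaneous cells sees its third cell rejected, while the same cell offered one time unit later conforms.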
An important part of CAC is the signaling between the user and the network for call establishment. The user network interface (UNI) is the interface between the user and the network. There are five possible states for the UNI (null, call initiated, outgoing call proceeding, active, and release) and four signaling messages (SETUP, CALL PROCEEDING, CONNECT, and RELEASE requests). The CAC performs a network defined algorithm to determine whether a call request can be accepted while maintaining the guaranteed quality of service (QoS) of each currently existing connection. The determination is based upon the service class and traffic descriptors in the information fields of the SETUP message. If a call request from the user is acceptable to the network, a CALL PROCEEDING message is sent to the user. CONNECT messages are then exchanged, and the connection becomes active so that data cells can flow on the newly established connection from the user to the network. If the network cannot meet the QoS of the existing connections with the addition of the new connection, the call request is rejected and a RELEASE message is returned to the user.
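The call-establishment exchange above amounts to a small state machine over the five UNI states and four signaling messages. The following sketch uses the state and message names from the text; the particular transition table is an illustrative assumption, since the text does not enumerate every transition.

```python
# Illustrative UNI call-establishment state machine. States and messages
# are taken from the text; the transition table itself is assumed.
TRANSITIONS = {
    ("null", "SETUP"): "call_initiated",
    ("call_initiated", "CALL PROCEEDING"): "outgoing_call_proceeding",
    ("call_initiated", "RELEASE"): "null",  # call request rejected by the CAC
    ("outgoing_call_proceeding", "CONNECT"): "active",
    ("active", "RELEASE"): "release",
}

def next_state(state, message):
    # Unexpected messages leave the state unchanged in this sketch.
    return TRANSITIONS.get((state, message), state)
```

A successful call thus walks null → call initiated → outgoing call proceeding → active, while a rejected call returns to null via a RELEASE.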
Referring to prior art FIG. 1(a), all of the call control signaling messages are provided in Q.2931 signaling format, with bytes 1 through 9 being standard, and bytes 10 and higher being message dependent information elements (IE). The signaling format for an IE is shown in FIG. 1(b). The SETUP message includes a number of IEs which are mandatory (user cell rate, called party number, connection identifier, QoS parameters, and broadband bearer capability), and may include optional IEs. Current optional IEs for the SETUP message include AAL parameter, calling party number, end point reference, broadband higher layer information, and broadband lower layer information. The bytes of the mandatory and optional IEs permit the CAC of an ATM switch to make an accept or reject decision.
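Walking the IEs that follow the 9-byte standard header can be sketched as below. Since FIG. 1(b) is not reproduced here, the 4-byte IE header layout assumed in this sketch (a 1-byte identifier, a 1-byte coding/flags field, and a 2-byte big-endian content length) is stated as an assumption rather than taken from the figure.

```python
HEADER_LEN = 9  # bytes 1 through 9 of a Q.2931 message are standard

def parse_ies(message: bytes):
    """Return (ie_identifier, contents) pairs from byte 10 onward.
    Assumes each IE carries a 4-byte header: identifier, coding/flags,
    then a 2-byte big-endian length of the contents."""
    ies = []
    pos = HEADER_LEN
    while pos + 4 <= len(message):
        ie_id = message[pos]
        length = int.from_bytes(message[pos + 2:pos + 4], "big")
        ies.append((ie_id, message[pos + 4:pos + 4 + length]))
        pos += 4 + length
    return ies
```

A CAC implementation would then look up the mandatory IEs (user cell rate, QoS parameters, and so on) by identifier in the returned list.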
Prior art FIG. 2 shows the architecture of an ATM switch 10. Input modules 12, 14 extract the ATM cell stream, perform usage parameter control, check for cell errors, and pass the acceptable cells to the switch fabric 16. The switch fabric 16 switches the cells from the input modules 12, 14 to the proper output modules 18, 20 based upon their VPI/VCI value. Signaling cells are identified by their VPI/VCI value and switched from the switch fabric 16 to the CAC 22 for processing. The output modules 18, 20 perform the opposite function of the input modules, assembling the switched cells into streams of cells for transmission.
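The routing step performed by the switch fabric can be sketched as a table lookup keyed on the VPI/VCI value, with the signaling channel diverted to the CAC. The table contents below are illustrative, and the choice of VPI=0/VCI=5 as the signaling channel is an assumption (it is the conventional UNI signaling channel, but the text does not specify it).

```python
# Assumed signaling channel; VPI=0/VCI=5 is the conventional UNI value.
SIGNALING_VPI_VCI = (0, 5)

def route_cell(vpi_vci, routing_table):
    """Return a cell's destination: the CAC for signaling cells,
    otherwise the output module found in the routing table."""
    if vpi_vci == SIGNALING_VPI_VCI:
        return "CAC"
    return routing_table[vpi_vci]
```

Here `routing_table` stands in for the fabric's VPI/VCI-to-output-port mapping, which in practice is populated by the CAC as connections are established.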
Once a call is accepted, ATM requires that the QoS agreed to in the traffic contract between the source and the network be guaranteed. To this end, ATM employs preventive and reactive traffic control methods: CAC, performed in a separate module of an ATM switch, prevents calls from being accepted if the QoS cannot be guaranteed, while usage parameter control (UPC), performed within the input modules, polices the traffic of accepted connections. CAC generally performs the following functions: (1) negotiating new connection requests with a user and establishing a traffic contract characterizing source traffic and QoS, (2) deciding on admission or rejection of the new connection according to the network policy, (3) allocating network resources so that the network efficiency is maximized with the addition of a new connection, (4) providing acceptable values to the UPC, and (5) releasing network resources when a virtual circuit is disconnected.
Referring to prior art FIG. 3, the CAC block receives a SETUP message at 30, and based upon the mandatory and optional IEs, reads the necessary resources for a requested virtual circuit connection (VCC) and runs a bandwidth allocation algorithm at 32 to determine at 34 whether the necessary resources are available at the switch. If the bandwidth allocation algorithm determines that the required resources are available, the CAC updates at 36 the allocation database with the new VCC and allocated resources, a traffic contract is agreed to at 38, and the VCC is passed at 40 to the user in a CONNECT message. If the algorithm determines that the required resources are not available, the CAC rejects the call at 42. The rejection is communicated through a RELEASE message. A number of different bandwidth allocation algorithms have been used for CAC, with the goal of each algorithm being to maximize the admission region and statistical gain of one or more particular types of ATM traffic over a switch without exceeding the bandwidth of the switch.
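The decision flow of FIG. 3 can be sketched with a deliberately naive peak-rate allocation rule: accept if the sum of allocated bandwidths plus the request fits within the link capacity. Real CAC bandwidth allocation algorithms are far more elaborate, as the text notes; the class name, units, and the peak-rate rule here are all illustrative assumptions.

```python
class SimpleCAC:
    """Illustrative CAC following FIG. 3: run an allocation check on SETUP,
    update the allocation database on acceptance, answer CONNECT or RELEASE."""

    def __init__(self, link_capacity):
        self.link_capacity = link_capacity  # assumed units, e.g. cells/second
        self.allocations = {}               # VCC identifier -> allocated bandwidth

    def setup(self, vcc_id, requested_bw):
        # Naive peak-rate allocation: accept only if the request fits
        # alongside every currently allocated connection.
        in_use = sum(self.allocations.values())
        if in_use + requested_bw <= self.link_capacity:
            self.allocations[vcc_id] = requested_bw  # update allocation database
            return "CONNECT"
        return "RELEASE"

    def release(self, vcc_id):
        # Free the resources when the virtual circuit is disconnected.
        self.allocations.pop(vcc_id, None)
```

Note that this sketch captures only the accept/reject skeleton; the statistical-gain algorithms discussed in the text replace the peak-rate sum with models of multiplexed bursty sources.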
In addition, several algorithms have been proposed in which the ATM network assigns one of two precedence levels to cells traversing a VCC. For example, in 1992, Nippon Telegraph and Telephone proposed a buffer reservation scheme in which the cells of a call are assigned to one of two levels of precedence (Saito, H., "Hybrid Connection Admission Control in ATM Networks", SuperComm/ICC, 1992). A precedence would be assigned to a call using the cell loss priority (CLP) bit. The amount of buffer space reserved for high precedence cells is dynamically adjusted according to the required bandwidth of the high precedence cells, with low precedence cells having available the remaining space in the dynamically allocated buffer. However, it is only once the network has determined that it can accept a requested VCC that precedence is given to the high precedence cell stream.
Bellcore and Brooklyn Polytechnic Institute in 1992 proposed that service classes be treated as precedence levels and that a cell scheduling policy be implemented at the output buffer of the ATM switch to ensure that the traffic descriptors of the precedence levels are met (Chao, J. and Uzun, N., "An ATM Queue Manager with Multiple Delays and Loss Priorities", IEEE Globecom, 1992). Toshiba has proposed the same concept (see Esaki, H., "Call Admission Control Method for ATM Networks", SuperComm/ICC, 1992). However, while providing precedence for certain cells is helpful for guaranteeing QoS for certain service classes, these schemes do not guarantee that lower precedence calls will not cause a higher precedence user to fail to gain access to the network.
In 1995 AT&T Bell Labs proposed multiplexing output buffers and assigning each output buffer a precedence level (Elwalid, A. and Mitra, D., "Analysis, Approximations, and Admission Control of a Multi-Service Multiplexing System with Priorities", IEEE, 1995). Buffer access to the switch output would be determined by the precedence level of the buffer and the status of higher precedence buffers. This approach has several disadvantages. First, the buffer sizes are pre-allocated. Second, precedence is based solely upon cell traffic already on the network: no consideration is given to the precedence levels of cell traffic attempting to connect to the network, which may be higher than those of traffic already on the network.
Columbia University and ChipCom have proposed dedicating service buffers coupled with dynamic precedence-based allocation within each buffer (Dailianas, A. and Bovopoulos, A., "Design of a Real-Time Call Admission Control Algorithm with Priority Support", IEEE, 1995). As in the Nippon proposal, two levels of precedence are indicated by the CLP bit. The network measures the occurrences of high precedence cells based on their CLP bit value and dynamically adjusts the amount of space dedicated to high precedence cells within each service buffer. However, as in each of the other proposed schemes, precedence is assigned only to cells of accepted traffic, and is not assigned to VCCs. Preemption of a low precedence VCC in favor of a higher precedence VCC is not provided for.