This invention relates to a flexible architecture of a telecommunications system using datagrams, i.e., packets, such as ATM.
The volume of voice and voice-band calls is increasing markedly, and network providers are being challenged to offer these "plain old telephone" services at competitive prices. ATM presents an opportunity to reduce costs, and is therefore being considered for carrying circuit-switched voice traffic. Conventionally, a circuit-switched network is managed by formulating a logical view of the network that includes a link between most pairs of network switches, and the network is managed at the logical level. The logical view does not necessarily correspond to the actual physical network. The logical connections over which routing is performed ride on a facility network. The facility level contains the physical switches and transmission resources. The connections demanded at the logical level are mapped into demands on the facility network. Routes that appear as direct at the logical level may pass through many cross-connects at the physical level.
The partitioning of a circuit-switched network into logical and physical layers results in significant inefficiencies. Physical diversity is difficult to plan for such networks due to the indirect mapping between the logical and physical layers, and such networks have high operations costs due to the constant need to resize trunk groups between switch pairs as the load changes or shifts. Also, sharing of bandwidth is limited to the possible alternate paths at the logical layer. Finally, such networks are difficult to scale as network traffic increases because each switch that is added to the network must be interconnected to all other switches at the logical layer, trunks on existing switches must be re-homed to the new switch, and the routing tables at all other switches in the network must be updated. All of this creates substantial operational load on the network provider. Since facilities are in units of T3 capacity, fragmentation of trunk groups also increases with the size of the network.
ATM networks have the potential to eliminate some of the inefficiencies in traditional circuit-switched networks. In an ATM implementation that creates circuit connections, the logical and physical network separation may or may not be maintained. Voice calls in such a network may be treated as ATM virtual circuits, which may be either Constant Bit Rate (CBR) or Variable Bit Rate (VBR) arrangements, depending on the voice coding scheme. These virtual circuits may be set up using standardized ATM setup procedures and routing protocols, as, for example, in the Private Network-to-Network Interface (PNNI) specification. However, the standard procedures of an ATM network require the ATM switches in the network to perform a substantial amount of computation, which is burdensome and which makes it difficult to operate the network at high load volumes.
The ATM standard defines a Connection Admission Control (CAC) to manage node-by-node call admission based on knowledge of congestion at the node. The CAC is used to ensure that calls receive their Grade-of-Service guarantees on call- and cell-level blocking. The Private Network-to-Network Interface (PNNI) protocol in ATM uses a Generalized Call Admission Control (GCAC) to perform the call admission control function at network edges based on knowledge of congestion internal to the network. Both of the described ATM schemes provide capacity management in a distributed manner.
The problems associated with prior solutions for implementing ATM in a large-scale voice network are overcome by providing an efficient means by which capacity in the network is more fully shared without adversely affecting call setup latency, while at the same time simplifying network operations. This is achieved by performing the functions of route setup, route allocation, and capacity management in an ATM network at the edges of the ATM network. By "edges" is meant the interface between an ATM switch of the network and other than another ATM switch of the network; for example, the interface between each ATM switch and customers. In accordance with the principles disclosed herein, the edges contain nodes that form the interface between the backbone ATM switches and the link(s) that interconnect them (i.e., the ATM backbone network) and the outside world. These nodes comprise controllers and other apparatus that in some cases may be incorporated in, or connected as adjuncts to, the ATM switches.
Edge nodes assign calls to virtual paths based on the destination of the call and the current load status of each of a number of preselected paths. Thus, each call is assigned a VPI (Virtual Path Identifier) corresponding to the path chosen and a VCI (Virtual Circuit Identifier) corresponding to the identity of the call at that edge node. The ATM backbone nodes route calls based solely on the VPI. Destination-based routing allows VPIs to be shared among routes from different sources to the same destination.
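The assignment step described above can be sketched as follows. This is a minimal, hypothetical illustration in Python; the class and attribute names (EdgeNode, paths_by_destination, path_status, assign_call) are illustrative and do not appear in the disclosure:

```python
# Minimal sketch of an edge node's call-assignment step. All names here
# are illustrative, not from the disclosure.
class EdgeNode:
    def __init__(self, paths_by_destination, path_status):
        # paths_by_destination: destination -> preselected candidate VPIs
        self.paths_by_destination = paths_by_destination
        # path_status: VPI -> current load figure (lower means less loaded)
        self.path_status = path_status
        # next free VCI per VPI at this edge node
        self.next_vci = {}

    def assign_call(self, destination):
        # Choose among the preselected paths based on current load status.
        candidates = self.paths_by_destination[destination]
        vpi = min(candidates, key=lambda p: self.path_status[p])
        # The VCI identifies this particular call at this edge node;
        # VCI values below 32 are conventionally reserved in ATM.
        vci = self.next_vci.get(vpi, 32)
        self.next_vci[vpi] = vci + 1
        return vpi, vci
```

Note that the backbone switches never examine the VCI; it matters only at the edge node that assigned it.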
Capacity management and load balancing are achieved through a Fabric Network Interface (FNI) that is present in each of the edge nodes, along with a Centralized FNI (CFNI) that maintains backbone link status. The FNI is responsible for keeping track of the load on each access link from its edge node to the backbone ATM switch it homes onto, as well as the load on each backbone link of the calls it originated. This load is measured in nominal bandwidth requirements for CBR services and could be measured in effective bandwidths for other services. The FNI is also responsible for periodically sending its information to the CFNI. The CFNI collects the received information and calculates the bandwidth used on each backbone link. It then computes a link status for each access and backbone link and sends this status information to each FNI. This information assists the FNIs in carrying out their tasks.
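The FNI/CFNI bookkeeping might be sketched as below. This assumes, purely for illustration, that FNI reports arrive as per-link bandwidth dictionaries and that a simple utilization threshold stands in for whatever status computation the CFNI actually performs:

```python
# Illustrative sketch of CFNI aggregation under assumed data shapes.
def aggregate_link_loads(fni_reports):
    """Sum the per-link bandwidth each FNI reports for calls it originated."""
    totals = {}
    for report in fni_reports:              # one report per FNI
        for link, bandwidth in report.items():
            totals[link] = totals.get(link, 0.0) + bandwidth
    return totals

def link_status(totals, capacities, threshold=0.9):
    """Mark a link 'congested' when usage exceeds a fraction of capacity.

    The 0.9 threshold is an assumption, not a figure from the disclosure.
    """
    return {link: ("congested" if totals.get(link, 0.0) > threshold * cap
                   else "available")
            for link, cap in capacities.items()}
```

The resulting status map is what the CFNI would distribute back to the FNIs to guide their path choices.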
A network is provided having a plurality of interconnected backbone switches, where each backbone switch is connected to at least one other backbone switch by a β-link. The network also has a plurality of edge nodes, where each edge node is connected to at least one backbone switch by an α-link. A routing map is provided that defines a first pre-provisioned path that leads from a first of the backbone switches, along one or more β-links, to a second of the backbone switches, then along an α-link to a destination edge node, which is one of the plurality of edge nodes. The first pre-provisioned path also includes a number of intermediary backbone switches, i.e., backbone switches in addition to the first and second backbone switches, equal to the number of β-links included in the pre-provisioned path minus one. The first pre-provisioned path is associated with a first virtual path identifier (VPI). A routing status database, logically connected to each of the edge nodes, maintains the routing map and tracks the congestion status of each α-link and each β-link in the network. The first VPI defines a first path from a first source edge node, which is one of the plurality of edge nodes, to the destination edge node. This first path runs from the first source edge node to a backbone switch selected from the group consisting of the first backbone switch and the intermediary backbone switches included in the first pre-provisioned path, and then along the first pre-provisioned path to the destination edge node. The first VPI also defines a second path from a second source edge node, which is one of the plurality of edge nodes, to the destination edge node. This second path runs from the second source edge node to a backbone switch selected from the group consisting of the first backbone switch and the intermediary backbone switches included in the first pre-provisioned path, and then along the first pre-provisioned path to the destination edge node.
As a result, destination-based routing to the destination node is implemented. The first VPI defines a plurality of paths from a plurality of edge nodes to the destination node, similar to the way that the branches of a tree converge to a single trunk. A method of using the network is also provided.
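Destination-based forwarding in the backbone can be illustrated with a toy lookup. A backbone switch consults only the VPI, so cells from different sources bound for the same destination share a single table entry; the table contents and names below are invented for illustration:

```python
# Toy illustration of VPI-only forwarding at a backbone switch. A real
# switch would hold one table entry per provisioned virtual path.
def forward(vpi_table, cell):
    vpi, vci, payload = cell
    # The backbone switch ignores the VCI entirely: the VPI alone selects
    # the outgoing port, so calls from different source edge nodes to the
    # same destination ride the same virtual-path "tree".
    return vpi_table[vpi]

# Two cells from different source edge nodes carrying the same VPI:
table = {7: "port_toward_destination_edge"}
cell_from_x = (7, 40, b"voice sample from source X")
cell_from_y = (7, 41, b"voice sample from source Y")
```

Both cells exit on the same port; only the destination edge node cares that their VCIs differ.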
A method is provided for setting up a communication from a first edge node, across a network that uses direct virtual path routing, to a second edge node. The first edge node receives a request to set up the communication. The second edge node is identified as the destination of the communication, based on the request. A first virtual path identifier (VPI) is obtained that defines a first path from the first edge node across the network to the second edge node. A second VPI is obtained that defines a second path from the second edge node across the network to the first edge node. The first and second VPIs are selected by a routing status database. A first virtual channel identifier (VCI) within the first VPI is selected. A second virtual channel identifier (VCI) within the second VPI is selected. Data is transmitted from the first edge node to the second edge node using the first VPI and first VCI, and from the second edge node to the first edge node using the second VPI and second VCI. Switches adapted to carry out the method are also provided.