Asynchronous Transfer Mode (ATM) has emerged as a very promising transport technique for supporting services of diverse bit-rate and performance requirements in future broadband networks. High-speed packet switches are essential elements for successful implementation of ATM networks. If a significant population of network users are potential broadband-service subscribers, high-capacity packet switches with a large number of input and output ports are required.
Two basic approaches to large packet switch design have emerged from recent research. Both concentrate on scalable designs that construct a large switch from smaller switch modules. The first approach strives to avoid internal buffering of packets in order to simplify traffic management. Examples in this category are the Modular switch, the generalized Knockout switch, and the 3-stage generalized dilated-banyan switch (with no buffering at the center stage).
The second approach attempts to build a large switch by simply interconnecting switch modules as nodes in a regularly-structured network, with each switch module having its own buffer for temporary storage of packets. A notable example in this category is illustrated in FIG. 1a and described in the article by H. Suzuki, H. Nagano, T. Suzuki, T. Takeuchi, and S. Iwasaki, "Output-Buffer Switch Architecture For Asynchronous Transfer Mode," CONF. RECORD, IEEE ICC '89, pp. 99-103, June 1989. As described in the article and as illustrated in FIG. 1a, output-buffered switch modules 10 are connected together as in the 3-stage Clos circuit-switch architectures. The switch modules 10 have internal buffers 12 at their outputs. In these switch architectures, a packet typically must pass through several queues before reaching its desired output.
Because of the simplicity of switches in the second approach, they have been the potential focus of several switch vendors. However, these switches necessitate more complicated network control mechanisms, since more queues must be managed. In addition, for the Clos architecture, routing within the switching network becomes an issue because there are multiple paths from any input port to any output port as illustrated in FIG. 1b.
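The multiplicity of paths noted above follows directly from the Clos structure: every first-stage module connects to every middle-stage module, and every middle-stage module connects to every third-stage module, so an input/output module pair is joined by one path per middle module. The following sketch enumerates these alternatives; the function and parameter names are illustrative, not from any cited source.

```python
def clos_paths(n: int, m: int, p: int, in_module: int, out_module: int):
    """Enumerate routes between a first-stage module and a third-stage
    module in a 3-stage Clos network with p first-stage modules, m
    middle-stage modules, and p third-stage modules (n inputs per
    first-stage module). Each route is identified by the middle-stage
    module it traverses, so there are exactly m alternative paths."""
    return [(in_module, mid, out_module) for mid in range(m)]

# For the 1024-port example discussed below (n = m = p = 32), any
# input/output module pair is joined by 32 distinct routes.
paths = clos_paths(32, 32, 32, in_module=0, out_module=5)
assert len(paths) == 32
```

It is this abundance of equivalent-looking routes that makes path selection, rather than path existence, the central difficulty.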
Things become even more complicated if multicast (point-to-multipoint) connections, an important class of future broadband services, are to be supported. For communications networks that use these switching networks for switching in their nodes, each node should be treated as a "micronetwork" rather than an abstract entity with queues at the output links only, as is done traditionally. Internal and output buffers 14 are provided as illustrated in FIG. 1b.
An open question is to what extent the internal buffers in the micronetwork would complicate traffic management and whether routing algorithms for call setups would require unacceptably long execution times. It is assumed that all switch modules in the micronetwork have multicast capability. The multicast routing problem in a 3-stage Clos switching network can be compared with the multicast routing problem in a general network. Three features associated with routing in the Clos network are:
1. Necessity for a very fast setup algorithm;
2. Large numbers of switch modules and links; and
3. Regularity and symmetry of the network topology.
It is necessary to have an algorithm that is faster and more efficient than those used in a general network because the Clos switching network is only a subnetwork within an overall communications network.
From the viewpoint of the overall network, the algorithm performed at each Clos switching network is only part of the whole routing algorithm. Adding to the complexity is the highly connected structure of the Clos network, which dictates the examination of a large number of different routing alternatives. The Clos network is stage-wise fully connected in that each switch module is connected to all other switch modules at the adjacent stage.
As an example, for a modest Clos network with 1024 input and output ports made of 32-input × 32-output switch modules (with n=32, m=32, p=32 as illustrated in FIG. 1a), the numbers of nodes and links are 96 and 3072, respectively. Thus, algorithms tailored for a general network are likely to run longer than the allotted call-setup time. Both features 1 and 2 above argue for the need for a more efficient algorithm, and feature 3, regularity of the network topology, may lend itself to such an algorithm.
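The counts cited above can be reproduced from the Clos parameters. This is a minimal sketch, assuming the node count is the total number of switch modules (p + m + p) and the link count includes the 1024 input-port links plus the two stages of internal inter-module links (n·p + p·m + m·p), which matches the figures of 96 and 3072 given in the text.

```python
def clos_size(n: int, m: int, p: int):
    """Node and link counts for a 3-stage Clos network with p first-stage
    modules of n inputs each, m middle-stage modules, and p third-stage
    modules. Link count assumed to cover input-port links plus the two
    internal stages (output-port links not counted)."""
    nodes = p + m + p              # first-, middle-, and third-stage modules
    links = n * p + p * m + m * p  # input links + stage 1-2 + stage 2-3 links
    return nodes, links

nodes, links = clos_size(32, 32, 32)
print(nodes, links)  # 96 modules, 3072 links
```

Even at this modest scale, a routing algorithm must weigh thousands of links, which is why a general-purpose shortest-path search is unlikely to finish within the call-setup budget.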
In the article entitled "Nonblocking Networks for Fast Packet Switching," CONF. RECORD, IEEE INFOCOM '89, pp. 548-557, April 1989, by R. Melen and J. S. Turner, the relationships between various switch parameters that guarantee an ATM Clos network to be nonblocking are derived. In the ATM setting, each input and output link in a switch carries traffic originating from different connections with varying bandwidth requirements.
An ATM switch is said to be nonblocking if a connection request can find a path from its input to its targeted output and the bandwidth required by the connection does not exceed the remaining bandwidths on both the input and output. What was not addressed in Melen and Turner is the issue of routing. Even though the switch used may be nonblocking as defined, a connection may still suffer unacceptable performance in terms of delay and packet loss if the wrong path is chosen. This is due to contention among packets for common routes in the ATM setting where packet arrivals on different inputs are not coordinated. Consequently, regardless of whether the switch is nonblocking, some routes will be preferable because they are less congested.
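The nonblocking criterion described above reduces to a residual-bandwidth check on the input and output links. The following is a minimal sketch of that admission test; the variable names and the fixed-capacity link model are illustrative assumptions, not taken from the cited paper, and the test deliberately says nothing about which internal route to choose.

```python
def admissible(bw_request: float,
               input_used: float, input_capacity: float,
               output_used: float, output_capacity: float) -> bool:
    """True if a new connection's bandwidth fits within the residual
    bandwidth of both its input link and its targeted output link.
    (Illustrative model: each link has a fixed capacity and a running
    total of bandwidth already committed to existing connections.)"""
    return (bw_request <= input_capacity - input_used and
            bw_request <= output_capacity - output_used)

# On 150 Mb/s links each already carrying 100 Mb/s, a 40 Mb/s request
# fits, but an 80 Mb/s request does not.
assert admissible(40, 100, 150, 100, 150)
assert not admissible(80, 100, 150, 100, 150)
```

Passing this check guarantees only that some path exists; it does not identify which of the many internal routes is least congested, which is precisely the routing issue left open by Melen and Turner.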