1. Field of the Invention
The invention relates generally to data communications, and more particularly to data switching arrangements.
2. Description of Related Art
Recent years have witnessed a marked increase in traffic volume for wide-area networks (WANs), such as the Internet, as well as for local-area networks (LANs), such as on-premises Ethernet systems. This increase in traffic volume is driven by new technologies, migration from a paradigm of centralized computing to one of distributed computing, and the proliferation of a wide variety of new applications. Also, the rapid pace of technological growth has produced an ever-increasing amount of interdisciplinary work in which groups of individuals from diverse technical backgrounds come together to collaborate on a single project. Data networks designed for traditional communities of interest, such as departments, are no longer adequate. The community of interest has now expanded significantly, and, furthermore, the boundaries of the community of interest are no longer static and may, in fact, change from day to day.
Designing a communications network for a large, ever-changing community of interest poses problems that are not adequately addressed by presently-existing data communications systems. In addition to the increased traffic volume of a relatively large network, a bewildering variety of co-existing applications such as telephony, video and computer data networking must often be supported. In general, each of these applications is characterized by a unique set of properties and requirements. The network must therefore be equipped to convey a plurality of applications among various endpoint devices. This challenge has resulted in prior art approaches moving away from more traditional methods of operation, involving routers and bridges, towards more flexible operational modes that utilize on-premises switching arrangements.
Some applications are relatively immune to degradations caused, for example, by data delays and/or losses in the network, whereas others are very vulnerable to these degradations. For instance, if an application is virtually immune to degradation, this may signify that an endpoint device receiving the application data stream will be able to produce humanly intelligible output during the degradation. On the other hand, if the stream is vulnerable to degradation, the degradation will have a relatively significant impact on the output of the endpoint device, and the intelligibility of that output may be quite poor. To complicate matters further, a given stream may be immune to some types of degradation, but very vulnerable to other types of degradation. For example, file transfer applications, and other applications generally known to those skilled in the art as TCP/IP applications, are relatively insensitive to delay, but are relatively vulnerable to data losses.
Existing networks utilize data flow control techniques that do not distinguish between the aforementioned varied application data types. In other words, all data are treated in the same manner, irrespective of the effect that a data stream degradation will have on that data type, and irrespective of the effect that such a degradation will have on the quality of service perceived by an endpoint device user. Prior art flow control methods provide no effective mechanism for advantageously exploiting the unique properties of each of these diverse data types.
One mechanism for exploiting the unique characteristics of a plurality of data types is to define one or more data priority levels. Data priority can be defined with reference to quality of service considerations, which take into account the effect that data delay and/or loss will have on the intelligibility of the output as perceived by a typical endpoint device user. If high-priority data are delayed and/or lost, the effect on intelligibility is relatively great, whereas if low-priority data are delayed and/or lost, the effect on intelligibility is relatively insignificant. For example, consider a network that is equipped to switch ATM (asynchronous transfer mode) data. In ATM, five classes of data service have been defined, including CBR (constant bit rate) data, real-time VBR (variable bit rate) data, non-real-time VBR data, ABR (available bit rate) data, and UBR (unspecified bit rate) data. CBR data are relatively sensitive to delays and losses, meaning that such delays and/or losses degrade the quality of service to a relatively significant degree, whereas UBR data are relatively insensitive to delays and/or losses, so that the quality of service remains relatively unaffected. Therefore, CBR data packets may be conceptualized as high-priority data traffic, and UBR data packets as low-priority data traffic.
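For illustration only, the five service classes and a two-level priority assignment of the kind described above might be sketched as follows. The particular class-to-priority mapping shown is an assumption made for this example (delay/loss-sensitive classes treated as high priority), not a definition taken from the ATM specification.

```python
from enum import IntEnum

class ServiceClass(IntEnum):
    """The five ATM service classes named above."""
    CBR = 0      # constant bit rate
    RT_VBR = 1   # real-time variable bit rate
    NRT_VBR = 2  # non-real-time variable bit rate
    ABR = 3      # available bit rate
    UBR = 4      # unspecified bit rate

# Hypothetical mapping of service class to a two-level priority,
# treating the delay/loss-sensitive classes as high priority.
PRIORITY = {
    ServiceClass.CBR: "high",
    ServiceClass.RT_VBR: "high",
    ServiceClass.NRT_VBR: "low",
    ServiceClass.ABR: "low",
    ServiceClass.UBR: "low",
}
```

In practice, finer-grained mappings (e.g., one priority per class) are equally plausible; the point is only that each packet's service class can be resolved to a priority usable by flow control.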
In general, multipriority data traffic is traffic that includes representations of different types of data such as, for example, CBR data, VBR data, ABR data, and UBR data. This data traffic is typically organized into data packets. With respect to switching delays and losses, prior art communications networks do not distinguish one type of data from another. What is needed is some mechanism for distinguishing high-priority data packets from low-priority data packets for purposes of data flow control.
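As one hypothetical sketch of the kind of priority-aware flow control called for here, a buffer could discard low-priority packets in preference to high-priority ones when space runs out. The class below, its name, and its eviction policy are illustrative assumptions, not a description of any particular prior-art or inventive method.

```python
from collections import deque

class PriorityAwareBuffer:
    """Sketch: a bounded buffer that, when full, evicts the
    lowest-priority resident packet to admit a higher-priority
    arrival (smaller number = higher priority, 0 = highest)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()  # holds (priority, packet) tuples

    def enqueue(self, priority, packet):
        """Return True if the packet is admitted, False if dropped."""
        if len(self.queue) < self.capacity:
            self.queue.append((priority, packet))
            return True
        # Buffer full: find the lowest-priority resident packet.
        worst = max(range(len(self.queue)),
                    key=lambda i: self.queue[i][0])
        if self.queue[worst][0] > priority:
            # Arrival outranks it: evict and admit.
            del self.queue[worst]
            self.queue.append((priority, packet))
            return True
        return False  # arrival is the lowest priority; drop it
```

A flow control scheme built this way drops UBR-like traffic under congestion while preserving CBR-like traffic, matching the quality-of-service reasoning above.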
Flow control techniques operate in the environment of data switching devices. As an example, consider the switch architecture shown in FIG. 1. A switch fabric 102 is provided in the form of a dual-bus architecture having a transmit bus 104 and a receive bus 106. The dual-bus architecture of FIG. 1 is shown for illustrative purposes only, as other types of switch fabric architectures do not employ dual busses, and still other types of switch fabric architectures do not employ any busses. Although the techniques disclosed herein are described in the context of a dual-bus architecture, it is to be understood that these techniques are also applicable in the operational environments of other types of switch architectures including, for example, a shared memory architecture.
The transmit bus 104 and the receive bus 106 are adapted for connection to one or more port cards 108, 109, 113. The port cards 108, 109, 113 transmit on transmit bus 104 and receive from receive bus 106. Receive bus 106 is separate from transmit bus 104, but the transmit bus 104 is looped back onto the receive bus 106 through a loop-back circuit 111 located at an end of the transmit bus 104 and an end of the receive bus 106. These port cards are typically equipped to handle a wide variety of interfaces such as ATM (asynchronous transfer mode) interfaces, LAN (local area network) interfaces such as Ethernet, and TDM (time division multiplexed) circuit interfaces. The architecture set forth in FIG. 1 is often employed to provide access hubs and/or backbone hubs in the operational environments of campuses, private networks, and corporate networks.
Access to the transmit bus 104 may be achieved through the use of a technique commonly known as a multipriority round-robin discipline, and this technique is performed among active port cards 108, 109, 113. Port cards 108, 109, 113 interface to the receive bus 106 and to the transmit bus 104 via a high-speed integrated circuit referred to as the bus interface chip (BIC) 110. The BIC 110 includes a first high-speed first-in, first-out (FIFO) staging buffer 112 for transmission on the transmit bus 104, a second high-speed FIFO buffer 114 for receipt from the receive bus 106, and a processor 115. Port cards 108, 109, 113 each include slow-speed memory 116, which may be provided in the form of random-access memory (RAM), and which could be, but is generally not, integrated into BIC 110. Slow-speed memory 116 serves as the primary buffering area to and from the actual physical communications ports of the BIC 110. One function of the FIFO staging buffer 112 is to serve as a staging area for data sent from a port card to the transmit bus 104, and one function of the high-speed FIFO buffer 114 is to serve as a rate converter (from the bus transmission rate to the communications port transmission rate) for data received from the receive bus 106. Due to the large potential difference in data transfer rates between the receive bus 106 and a port card (e.g., port card 108), FIFO buffer 114 may overflow. Therefore, what is needed is a data flow control technique that adequately compensates for any disparities in data transfer rates out of the port cards 108, 109, 113 on the one hand, and into the port cards from the receive bus 106 on the other, while respecting data priorities and the unique characteristics of the applications mapped to those priorities.
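As a rough sketch only, one plausible reading of a multipriority round-robin access discipline is the following: in a single arbitration sweep, each priority level is served in turn, highest first, and within each level the active port cards are visited in round-robin order. The data layout assumed here (`ports` as a mapping from port identifier to a list of per-priority queues) is an assumption made for the example, not the structure of any actual BIC.

```python
from collections import deque

def multipriority_round_robin(ports, num_priorities):
    """One arbitration sweep over the transmit bus.

    For each priority level, from highest (0) to lowest, visit the
    port cards in order and grant one bus slot to each port that has
    a packet queued at that level.  Returns the granted
    (port_id, packet) pairs in transmission order.
    """
    grants = []
    for prio in range(num_priorities):
        for port_id, queues in ports.items():
            if queues[prio]:
                grants.append((port_id, queues[prio].popleft()))
    return grants
```

Under this reading, all high-priority traffic queued anywhere on the bus is transmitted before any low-priority traffic, while ports at the same priority level share the bus fairly; a real arbiter would also have to bound how long low-priority queues can be starved.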