The state of communications technology, particularly that which affects the Internet, is currently in flux and subject to rapid and often uncoordinated growth. The ubiquity and diversity of personal computers and set-top boxes have placed significant pressure on the providers of communications system infrastructure to accommodate the alarming increase in the number of new users that demand immediate access to the Internet and other network resources. The rapid development of new and sophisticated software made available to users of such services places additional demands on system infrastructure.
Conducting commerce over the Internet and other networks is a practice that is gaining acceptance and popularity. By way of example, traditional on-line services, such as those offered by Internet providers, typically charge customers a monthly fee for access to basic services and resources, such as proprietary and public databases of information. Such traditional service providers also advertise any number of products or services that are purchasable on-line by the user.
Other forms of Internet commercialization currently being considered or implemented include offering of video and audio conferencing services, and a variety of other real-time and non-real-time services. The providers of these services, as well as the providers of communications system infrastructure, are currently facing a number of complex issues, including management of network capacity, load, and traffic to support real-time, non-real-time, and high-bandwidth services, and implementing a viable billing scheme that accounts for the use of such services.
The communications industry is expending considerable attention and investment on one particular technology, referred to as asynchronous transfer mode (ATM), as a possible solution to current and anticipated infrastructure limitations. Those skilled in the art understand ATM to constitute a communications networking concept that, in theory, addresses many of the aforementioned concerns, such as by providing a capability to manage increases in network load, supporting both real-time and non-real-time applications, and offering, in certain circumstances, a guaranteed level of service quality.
A conventional ATM service architecture typically provides a number of predefined quality of service classes, often referred to as service categories. Each of the service categories includes a number of quality of service (QoS) parameters that define the nature of the respective service category. In other words, a specified service category provides performance to an ATM virtual connection (VCC or VPC) in a manner specified by a subset of the ATM performance parameters. The service categories defined in the ATM Forum specification referenced hereinbelow include, for example, a constant bit rate (CBR) category, a real-time variable bit rate (rt-VBR) category, a non-real-time variable bit rate (nrt-VBR) category, an unspecified bit rate (UBR) category, and an available bit rate (ABR) category.
The constant bit rate service class is intended to support real-time applications that require a fixed quantity of bandwidth during the existence of the connection. A particular quality of service is negotiated to provide the CBR service, where the QoS parameters include characterization of the peak cell rate (PCR), the cell loss ratio (CLR), the cell transfer delay (CTD), and the cell delay variation (CDV). Conventional ATM traffic management schemes guarantee that the user-contracted QoS is maintained in order to support, for example, real-time applications, such as circuit emulation and voice/video applications, which require tightly constrained delay variations.
The non-real-time VBR service class is intended to support non-real-time applications, where the resulting network traffic can be characterized as having frequent data bursts. Similarly, the real-time variable bit rate service category may be used to support “bursty” network traffic conditions. The rt-VBR service category differs from the nrt-VBR service category in that the former is intended to support real-time applications, such as voice and video applications. Both the real-time and non-real-time VBR service categories are characterized in terms of a peak cell rate (PCR), a sustainable cell rate (SCR), and a maximum burst size (MBS).
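Conformance of a VBR connection to its declared PCR, SCR, and MBS is conventionally tested with the generic cell rate algorithm (GCRA) defined in the ATM Forum Traffic Management Specification. The following is a minimal sketch of the virtual-scheduling form of that algorithm; the class structure and parameter names are illustrative, and only the arithmetic follows the specification.

```python
class GCRA:
    """Generic cell rate algorithm, virtual-scheduling form (sketch).

    increment: T, the nominal inter-cell interval (1 / contracted rate)
    limit:     tau, the tolerance permitting limited cell clumping
    """
    def __init__(self, increment, limit):
        self.increment = increment
        self.limit = limit
        self.tat = 0.0  # theoretical arrival time of the next cell

    def conforms(self, arrival_time):
        # A cell arriving earlier than TAT - tau is non-conforming
        # and does not advance the theoretical arrival time.
        if arrival_time < self.tat - self.limit:
            return False
        # Conforming: schedule the next theoretical arrival.
        self.tat = max(arrival_time, self.tat) + self.increment
        return True

def burst_tolerance(mbs, t_scr, t_pcr):
    """Burst tolerance for the SCR conformance test, derived from the
    maximum burst size: tau_s = (MBS - 1) * (T_scr - T_pcr)."""
    return (mbs - 1) * (t_scr - t_pcr)
```

In practice a VBR connection would be policed by two such instances in tandem, one parameterized by the PCR and one by the SCR together with the burst tolerance derived from the MBS.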
The unspecified bit rate (UBR) service category is often regarded as a “best effort service,” in that it does not specify traffic-related service guarantees. As such, the UBR service category is intended to support non-real-time applications, including traditional computer communications applications such as file transfers and e-mail.
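The correspondence between the service categories described above and their characterizing parameters can be restated in a simple lookup structure. The sketch below merely summarizes the foregoing text; it is not drawn from any specification's data model, and the ABR entry is left empty because, as described below, ABR rates are governed by feedback rather than by a fixed parameter set.

```python
# Service categories and their characterizing traffic parameters,
# as described in the text above (illustrative summary only).
SERVICE_CATEGORIES = {
    "CBR":     {"real_time": True,  "params": ["PCR", "CLR", "CTD", "CDV"]},
    "rt-VBR":  {"real_time": True,  "params": ["PCR", "SCR", "MBS"]},
    "nrt-VBR": {"real_time": False, "params": ["PCR", "SCR", "MBS"]},
    "UBR":     {"real_time": False, "params": []},  # best effort, no guarantees
    "ABR":     {"real_time": False, "params": []},  # rate varied by feedback
}
```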
The available bit rate (ABR) service category provides for the allocation of available bandwidth to users by controlling the rate of traffic through use of a feedback mechanism. The feedback mechanism permits cell transmission rates to be varied in an effort to control or avoid traffic congestion, and to more effectively utilize available bandwidth. A resource management (RM) cell, which precedes the transmission of data cells, is transmitted from the source to the destination and back to the source in order to provide traffic information to the source.
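The rate adjustment performed by an ABR source upon receipt of a returning RM cell can be sketched as follows. The additive-increase/multiplicative-decrease step sizes used here are illustrative placeholders, not the RIF/RDF machinery defined in the ATM Forum specification.

```python
def adjust_source_rate(current_rate, congestion_indicated, peak_rate,
                       increase_step=1.0, decrease_factor=0.875):
    """Adjust an ABR source's allowed cell rate from RM-cell feedback.

    congestion_indicated: congestion flag carried back by the returning
    RM cell. Step sizes are illustrative assumptions.
    """
    if congestion_indicated:
        # Back off multiplicatively to relieve congestion.
        new_rate = current_rate * decrease_factor
    else:
        # Probe additively for available bandwidth.
        new_rate = current_rate + increase_step
    # The allowed rate may never exceed the contracted peak cell rate.
    return min(new_rate, peak_rate)
```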
Although the current ATM service architecture described above would appear to provide, at least at a conceptual level, viable solutions to the many problems facing the communications industry, ATM, as currently defined, requires implementation of a complex traffic management scheme in order to meet the objectives articulated in the various ATM specifications and recommendations currently being considered. In order to effectively manage traffic flow in a network, conventional ATM traffic management schemes must assess a prodigious number of traffic condition indicators, including service class parameters, traffic parameters, quality of service parameters and the like. A non-exhaustive listing of such parameters and other ATM traffic management considerations is provided in ITU-T Recommendation I.371, entitled Traffic Control and Congestion Control in B-ISDN, and in Traffic Management Specification, version 4.0 (af-tm-0056.000, April 1996), published by the Technical Committee of the ATM Forum.
Notwithstanding the complexity of conventional ATM traffic management schemes, current ATM specifications and recommendations fail to adequately address the need of service providers for a methodology that provides for accurate and reliable charging of services utilized by users of the network. Even if one were to assume that a charging scheme that accounts for most or all of the currently defined ATM traffic management properties could be developed, such a scheme would necessarily be complex and would typically require administration by highly skilled operators. The high overhead and maintenance costs to support such a billing scheme would likely be passed on to the network provider and, ultimately, to the network user.
The present invention is applicable in a network service class which incorporates a priority-based quality of service. This service class, hereinafter referred to as the Simple Integrated Media Access (SIMA) service class, provides a network management architecture that is simple in concept and in its implementation, yet adequately addresses the quality of service requirements to support a variety of network services, including real-time and non-real-time services. It also provides for the implementation of a simple and effective charging capability that accounts for the use of network services.
In a SIMA or non-SIMA network, incoming packets at a network node are received at one of a number of node inputs, and are made subject to node routing, switching, and/or multiplexing functions that direct the packets to their respective node output ports. Multiplexing is the means by which multiple streams of information share a common physical transmission medium. Switching takes multiple instances of a physical transmission medium and rearranges the information streams between the inputs and outputs. A router is a network device operating at multiple layers of the Open Systems Interconnection Reference Model (OSIRM), including the network layer, and is capable of switching and routing data based upon network protocols. These, and similar functions, are performed at the nodes of the network to guide the packets to their respective destinations.
In a network incorporating a priority-based service class, such as SIMA, each network node is equipped with cell scheduling and buffering modules capable of recognizing an incoming packet or cell priority, and accepting or discarding the packet based on an accepted priority associated with that particular node. The accepted node priority may change depending on the level of packet traffic traversing the node. Each output of the node includes such a cell scheduling and buffering module.
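The accept/discard decision performed by such a cell scheduling and buffering module can be sketched as follows. The text states only that the accepted priority varies with the traffic load; the particular mapping from buffer occupancy to accepted priority, the eight priority levels, and the convention that a higher numeric value means higher priority are all illustrative assumptions.

```python
import collections

class SchedulingAndBuffering:
    """Per-output cell scheduling and buffering (sketch): accept or
    discard each packet by comparing its priority against a load-dependent
    accepted priority. Thresholds here are illustrative assumptions."""

    def __init__(self, capacity, num_levels=8):
        self.capacity = capacity      # buffer size in packets
        self.num_levels = num_levels  # assumed: priorities 0..num_levels-1
        self.buffer = collections.deque()

    def accepted_priority(self):
        # Assumed mapping: the accepted priority rises linearly with
        # buffer occupancy, so an empty buffer accepts every packet and
        # a nearly full buffer accepts only the highest priorities.
        occupancy = len(self.buffer) / self.capacity
        return int(occupancy * self.num_levels)

    def offer(self, packet_priority, packet):
        # Discard if the priority is insufficient for the current load,
        # or if the buffer is already full.
        if packet_priority < self.accepted_priority() \
                or len(self.buffer) >= self.capacity:
            return False
        self.buffer.append(packet)
        return True
```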
A network node configuration that includes routing, switching or multiplexing functions (hereinafter collectively referred to as “switching functions”), followed by cell scheduling and buffering at each node output, performs switching functions on all received packets. When these packets are directed to their appropriate node output, they can be discarded at each node output where the packet priority is insufficient to meet the accepted node priority. While this advantageously allows the higher priority packets to be output from the network node, it does not alleviate the burden on the switching functions, which still need to process packets that may ultimately be discarded.
For example, problems may arise at a SIMA network node where a certain input(s) receives a large number of SIMA packets that cannot be forwarded due to the overall load of the network node. In such a case, the primary problem is that routing within the node is performed on all packets even though many of them will ultimately be discarded by the cell scheduling and buffering functions at the node outputs. Thus, the excess, low priority packet traffic to the input(s) of a SIMA core network node could potentially overload the node routing/switching unit.
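The wasted work described above can be illustrated with a short simulation: every packet passes through the switching function first and is only then tested against the output's accepted priority, so switching effort is spent even on packets that are subsequently discarded. The uniform traffic mix and the fixed accepted priority below are illustrative assumptions.

```python
import random

def route_then_discard(num_packets, accepted_priority, num_levels=8, seed=1):
    """Count switching operations wasted on packets that the output-side
    scheduling and buffering ultimately discards (illustrative model)."""
    rng = random.Random(seed)
    routed = wasted = 0
    for _ in range(num_packets):
        priority = rng.randrange(num_levels)  # assumed uniform traffic mix
        routed += 1  # the switching function processes every packet
        if priority < accepted_priority:
            wasted += 1  # switching effort spent on a discarded packet
    return routed, wasted
```

Under heavy load (a high accepted priority), most of the switching work in this model is spent on packets that are discarded anyway, which is precisely the overload risk described above.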
A similar problem may arise where SIMA packets are forwarded through a conventional network node without SIMA support. It is possible that the particular input(s) serving SIMA traffic may overload the routing function of the conventional network node. This is especially true of an IP router, which typically handles routing in a centralized, software-based unit. In this case, the routing function can be a bottleneck of the IP router, even without excess SIMA traffic.
Accordingly, there is a need for a system and method for alleviating packet traffic congestion adversely affecting the switching functions of a network node. The present invention therefore reduces the likelihood of the network node becoming overloaded, thereby overcoming this and other shortcomings of the prior art, and offering additional advantages over the prior art.