The problem of congestion control, or more generally traffic management, is quite significant for packet switching networks. Congestion in a packet switching network stems partly from the uncertainties involved in the statistical behavior of many types of traffic sources. Congestion is also due to the complicated way in which different traffic streams interact with each other within a packet network. Congestion control and traffic management in a broadband integrated services environment is further complicated by the high speed of transmission and by the diverse mix of traffic types and service requirements encountered in such networks.
Most of the strategies proposed for congestion control in conventional narrowband data networks are closed-loop in nature. This means that feedback information (e.g. acknowledgements) from the destination node or some intermediate nodes is used to decide whether to admit new packets into the network or to forward packets from one node to the next. At broadband transmission rates, packet duration, or the time required by a link to serve a packet, is very short. Therefore, propagation delays, when measured in terms of packet duration, are much higher in broadband networks than in conventional narrowband networks. Consequently, in a broadband network, any closed-loop or feedback-based control mechanism will tend to work more slowly and may be unable to keep up with the pace of events occurring in the network.
Services that may be encountered in a broadband integrated services network range from data and voice communications to file transfers, high speed circuit emulations, and different types of video services. These services represent a wide variety of traffic characteristics (e.g. average rate and burstiness) and a wide variety of service requirements (e.g. end-to-end delay, delay jitter, packet loss probability, call blocking probability, and error rate). The tasks of resource management in general and congestion control in particular are more involved in this environment than in a conventional narrowband network. In a broadband integrated services network, control algorithms, besides having to deal with a wide range of traffic characteristics, need to be effective in yielding predictable network behavior and must be flexible in accommodating different service requirements. Traffic management algorithms in a broadband network should also be simple in terms of the processing power required for their execution. The increase in data processing speeds has not kept up with the fast growth of data transmission speeds. Thus, packet processing ability in the network nodes has increasingly become the scarce resource. Therefore, the processing required for any control function should be kept to a minimum.
The above-identified patent application discloses a congestion control strategy which has several highly desirable features: it maintains loss-free communications, it provides bounded end-to-end delay, and it is very simple to implement. These features make the strategy an attractive solution for the transmission of real time traffic and other forms of time-critical information in a broadband packet network.
The congestion control strategy of the above-identified patent application is composed of two parts: a packet admission policy imposed for each connection at its source node (i.e. a policy which controls the admission of packets into the network) and a service discipline at the switching nodes named stop-and-go queuing.
Central to both parts of the strategy is the notion of time frames. For this reason the congestion control strategy of the above-identified patent application is known as a framing strategy. On each link in the network, time intervals or frames of duration T are defined. The frames may be viewed as propagating from the transmitting end of a link to the receiving end of the link. Illustratively, the frames may be defined such that the frames on the incoming links at each node are synchronous or the frames on the outgoing links at each node are synchronous.
A stream of packets is defined to be (r,T) smooth when, for fixed length packets, the number of packets in each frame of duration T is bounded by rT, where r is a rate measured in packets/sec. The packet admission part of the congestion control strategy is based on the foregoing definition of smoothness. In particular, each connection k in the network has a source node and a destination node. The source node for connection k is the network node via which packets belonging to connection k enter into the network from an end user. After a transmission rate r.sub.k is allocated and reserved for a connection k along its path to its destination, the admission of packets to the network from this connection is required to be (r.sub.k,T) smooth. This means that during each frame of duration T, the source node for the connection k admits into the network no more than r.sub.k T packets, and any additional packets are not admitted until the next frame starts. Alternatively, the allocated rate r.sub.k may be required to be large enough so that the stream of packets arriving at the source node and admitted to the network always maintains the (r.sub.k,T) smoothness.
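The admission policy just described can be sketched in a few lines of code. The following is a minimal illustrative sketch, not an implementation from the patent; the class and parameter names are hypothetical, and time is assumed to be measured in seconds with frames aligned at multiples of T.

```python
import math

class FrameAdmitter:
    """Illustrative sketch of the (r, T) smooth admission policy.

    At most r*T packets of a connection are admitted during each frame
    of duration T; any excess packets must wait for the next frame.
    """

    def __init__(self, rate_pkts_per_sec, frame_sec):
        self.limit = math.floor(rate_pkts_per_sec * frame_sec)  # r*T packets per frame
        self.frame_sec = frame_sec
        self.frame_index = None  # index of the frame currently being counted
        self.count = 0           # packets admitted so far in that frame

    def admit(self, arrival_time):
        """Return True if a packet arriving at arrival_time may enter the network."""
        frame = int(arrival_time // self.frame_sec)  # frame containing this arrival
        if frame != self.frame_index:                # a new frame starts: reset counter
            self.frame_index = frame
            self.count = 0
        if self.count < self.limit:
            self.count += 1
            return True
        return False                                 # held back until the next frame
```

For example, with r = 2 packets/sec and T = 1 sec, only two packets are admitted per frame; a third packet arriving in the same frame is deferred, and admission resumes when the next frame begins.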
The above-identified packet admission policy guarantees that the traffic stream of connection k, with an allocated rate r.sub.k, is (r.sub.k,T) smooth upon admission to the network. If this smoothness property continues to hold as the packet stream of each connection arrives at each intermediate switching node, then the problem of congestion is indeed resolved. Unfortunately, this is not often the case. In a network which utilizes conventional first-in, first-out (FIFO) queuing at the nodes, as packets of a connection proceed from intermediate node to intermediate node, they tend to cluster together and form longer and longer bursts, which bursts violate the original smoothness property.
The stop-and-go queuing technique is an alternative to conventional FIFO queuing which completely solves this problem. In particular, stop-and-go queuing guarantees that once the (r.sub.k,T) smoothness is enforced on all connections k at their source nodes, the property will continue to hold at any subsequent switching node. To facilitate discussion of the stop-and-go queuing scheme, it is useful to consider arriving and departing frames. In particular, at each node, the arriving frames are the frames of duration T on the incoming links and the departing frames are the frames of duration T on the outgoing links. Over each link in the network, the frames are viewed as traveling with the packets from one end of the link to the other. Therefore, when there is a propagation delay on link l and a processing delay at the receiving end of link l, which delays have a combined duration of .tau.l, the frames at the receiving end of link l will be .tau.l seconds behind the frames at the transmitting end.
Accordingly, as indicated above, the frames may be defined so that the arriving frames on all incoming links of each node are synchronous with respect to each other or so that the departing frames on all outgoing links of each node are synchronous with respect to each other. However, in general, at a node, the arriving and departing frames are asynchronous with respect to each other.
The stop-and-go queuing technique is based on the following rule: a packet which arrives at a node on an incoming link during an arriving frame f does not become eligible for transmission from the node until the beginning of the first departing frame on the desired outgoing link which starts after f expires.
As a result of this rule, it follows that:
(i) a packet that has arrived at some node via an incoming link during a particular arriving frame will be delayed and then transmitted on the appropriate outgoing link in the first departing frame on the outgoing link which starts after the end of the particular arriving frame (hence the name stop-and-go queuing);

(ii) the packet stream of each connection k will maintain its original (r.sub.k,T) smoothness throughout the network; and

(iii) a buffer with a capacity of at most
EQU B.sub.l =3C.sub.l T (1)
per link l is sufficient to eliminate any chance of buffer overflow, where C.sub.l is the capacity of link l.

When multiple frame sizes T.sub.p, each associated with a packet priority type p, are utilized, the following additional rule applies:

(B) Any eligible type-p packet has non-preemptive priority over a type-(p-1) packet. This means that a higher priority type-p packet which becomes eligible for service by an outgoing link of a node while a lower priority type-(p-1) packet is being served waits until service of the type-(p-1) packet is completed before gaining access to the outgoing link.

Under this rule, it follows that:

(i) any type-p packet that has arrived at a node via an incoming link during an arriving type-p frame f will receive service on an outgoing link during the first departing type-p frame which starts after the arriving type-p frame f has ended;

(ii) the packet stream of each connection will maintain its original smoothness property along its entire path through the network, i.e., for a type-p connection with an allocated transmission rate r.sub.k, the packet stream maintains the (r.sub.k,T.sub.p) smoothness throughout its path to its destination; and

(iii) a buffer with a capacity of at most
EQU B.sub.l =3.SIGMA.C.sub.l,p T.sub.p (4)
per link l, where the sum is taken over the priority types p, is sufficient to eliminate any chance of buffer overflow.
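The eligibility rule of stop-and-go queuing can be illustrated with a short calculation. The following is an illustrative sketch for the single frame size case only; the function name and the frame-offset parameters are hypothetical, and it is assumed that a departing frame starting exactly when the arriving frame ends qualifies as the first eligible frame.

```python
import math

def eligibility_time(arrival_t, T, in_offset=0.0, out_offset=0.0):
    """Time at which a packet arriving at arrival_t becomes eligible
    for transmission under stop-and-go queuing (single frame size T).

    Arriving frames on the incoming link start at in_offset + k*T;
    departing frames on the outgoing link start at out_offset + m*T.
    The packet waits until the first departing-frame boundary at or
    after the end of its arriving frame.
    """
    # End of the arriving frame that contains arrival_t.
    frame_end = in_offset + (math.floor((arrival_t - in_offset) / T) + 1) * T
    # Index of the first departing frame starting at or after frame_end.
    m = math.ceil((frame_end - out_offset) / T)
    return out_offset + m * T
```

With synchronous frames (both offsets zero) and T = 1 sec, a packet arriving at t = 0.5 becomes eligible at t = 1.0, the end of its arriving frame; if the departing frames are offset by 0.3 sec, the same packet waits until t = 1.3, the next departing-frame boundary.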
In a network which utilizes the above-described framing congestion control strategy (including the above-described packet admission policy and the above-described stop-and-go queuing strategy), it is useful to consider the tradeoff between queuing delays and flexibility in bandwidth allocation. More particularly, in the above-described congestion control strategy, the total queuing delay over all nodes for each connection is given by
EQU Q=.alpha.HT (2)
where .alpha. is a constant between 1 and 2 and H is the number of links in the path of the connection (e.g. a connection which utilizes two links has one intermediate node). Equations (1) and (2) indicate that by choosing a sufficiently small frame size T, one can arbitrarily reduce queuing delays as well as buffer requirements in the network.
In addition, for fixed-length packets of L bits, the incremental unit of bandwidth allocation is
EQU .DELTA.r=L/T bits/sec (3)
Equation (3) indicates that a small frame size T (and the resulting small buffer requirements and queuing delays) can only be achieved at the expense of a large incremental unit of bandwidth allocation and thus decreased flexibility in bandwidth allocation.
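The tradeoff expressed by Equations (1) through (3) can be made concrete with a small numerical sketch. The function below is purely illustrative; .alpha. is taken at its worst-case value of 2, and the sample link capacity and packet length are hypothetical.

```python
def framing_tradeoff(T, H, L_bits, C_pkts_per_sec, alpha=2.0):
    """Illustrate Equations (1)-(3) for a given frame size T.

    Returns the end-to-end queuing delay bound Q = alpha*H*T (Eq. 2),
    the per-link buffer bound B = 3*C*T in packets (Eq. 1), and the
    bandwidth-allocation granularity dr = L/T in bits/sec (Eq. 3).
    """
    Q = alpha * H * T            # queuing delay bound over H links
    B = 3 * C_pkts_per_sec * T   # buffer capacity bound per link
    dr = L_bits / T              # incremental unit of bandwidth allocation
    return Q, B, dr
```

Halving T halves both the delay bound Q and the buffer bound B, but doubles the allocation granularity .DELTA.r, which is precisely the tradeoff stated above.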
Briefly stated, it is a shortcoming of the above-described framing congestion control strategy that small queuing delays and small buffer capacity requirements can only be achieved at the expense of decreased bandwidth allocation flexibility.
Accordingly, it is an object of the present invention to provide a modification of the above-described framing congestion control strategy so that it is possible to enjoy small queuing delays for certain connections while still being able to allocate bandwidth using arbitrarily small incremental bandwidth units for other connections.
It is a further object of the present invention to provide a framing congestion control strategy which utilizes multiple frame sizes so that some connections can enjoy small queuing delays while other connections can be allocated bandwidth in small incremental units.