As communication network capacity continues to increase, so do the bandwidth requirements of contemporary networked applications. This demand fuels an on-going need for network quality of service (QoS) enhancements, particularly those that can accommodate applications with strict bandwidth, loss, and latency sensitivities. Traffic-shaping (TS) is often a reliable component of QoS strategies under these conditions, especially when there is a large discrepancy between available bandwidth and application demand. A traffic-shaper controls the amount of outbound traffic from one or more flows onto a bandwidth-limited network (e.g., from LAN to WAN) and is thus able to support the distribution of limited network resources according to human design. When this design includes reservation of bandwidth for a particular preferred flow, the traffic-shaper can not only guarantee that this minimum requirement is met, but can also constrain the preferred flow at some maximum usage so that other flows are not starved.
Consider the problem of maintaining high quality of service to a priority flow that has strict bandwidth requirements which may be either high or low, and whose transmission latency must be bounded. Network engineers face many challenges when designing for these constraints, especially in guaranteeing bandwidth for the priority flow. Error in estimating requirements can have serious consequences for a critical application, making an a priori “worst-case” analysis the only viable estimation process unless the priority application is itself QoS-aware. However, implementing network priority control using worst-case analysis can have significant impact on low-priority traffic. In particular, the estimated worst-case bandwidth requirement of the priority application may be orders of magnitude greater than what the application actually requires most of the time. The result of implementing the worst-case assumptions can be excessive resource reservation for the highest-priority flow and scarcity of resources for flows that compete with it. This scenario introduces disruption to potentially important network applications when their needs are “trumped” by those of the priority application.
FIG. 1 is a simplified block diagram of a portion 10 of a prior-art network communication system. In FIG. 1, portion 10 includes a traffic shaper 12. Traffic shaper 12 includes an IN interface 14 that typically faces a high-bandwidth data source such as a local area network (LAN), not illustrated. Interface 14 receives segments of data, known in the art as “packets,” from the high-bandwidth source network and communicates them to traffic shaper 12. The packets from the source network are, in general, generated independently or asynchronously, and are intended to be transmitted to a destination network—generally a wide area network (WAN)—having relatively limited bandwidth. Traffic shaper 12 processes the packets and makes them available to the limited-bandwidth network (not illustrated) by way of an OUT interface illustrated as a block 32.
The packets from IN interface 14 of FIG. 1 are applied to an enqueue logic block 16. This enqueue logic contains a bank of filters, the parameters of which have been read by way of a path 17 from the filter/queue/bandwidth (F/Q/B) table 18 at system initialization. Each filter logic may be implemented by a variety of techniques, including regular expressions, bitmasks, or combinations thereof, as known in the art. These logics are designed by the system's operator so that differential processing may be applied to individual packets based on the data they contain. Thus, a packet arriving at enqueue logic 16 is applied to each filter in sequence until a match is found. The packet is then marked with the queue number that the F/Q/B table 18 associates with the matching filter.
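The sequential filter-matching just described can be sketched as follows. This is a minimal illustration only: the patterns, queue numbers, and the use of regular expressions as the filter technique are assumptions for the example, not particulars of the system described.

```python
# Hypothetical sketch of sequential packet classification: each packet
# is tested against an ordered bank of filters, and the first match
# determines the queue number, as an F/Q/B-style table would associate.
import re

# Assumed (pattern, queue_number) pairs; values are illustrative only.
FQB_FILTERS = [
    (re.compile(rb"^VOIP"), 0),   # highest-priority flow -> queue 0
    (re.compile(rb"^VIDEO"), 1),  # next-highest priority -> queue 1
]
DEFAULT_QUEUE = 2                 # catch-all, lowest-priority queue

def classify(packet: bytes) -> int:
    """Return the queue number of the first matching filter."""
    for pattern, queue in FQB_FILTERS:
        if pattern.search(packet):
            return queue
    return DEFAULT_QUEUE
```

In this sketch, a packet beginning with `VOIP` would be marked for queue 0, while a packet matching no filter falls through to the default queue.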
Packets arriving at enqueue logic block 16 of FIG. 1 are marked or prioritized by enqueue logic 16. The marked packets are coupled or applied to a multiplexer (MUX) 21. Multiplexer 21 distributes or multiplexes the prioritized data packets to or among the queues 24a, 24b, 24c, . . . , 24N of a set 24 of queues based on their classification markings. Thus, for example, the highest-priority messages may be routed to queue 24a, the next-highest priority messages may be routed to queue 24b, . . . and the lowest-priority messages may be routed to queue 24N.
The data packets in the various queues of set 24 of queues of FIG. 1 are read from the queues by a dequeue logic arrangement illustrated as a block 26. Dequeue logic arrangement 26 is clocked by way of a path 27 from a clocking logic source 38. The dequeue logic 26 reads from the various queues of set 24 such that the outgoing bit rate from each queue conforms to the bandwidth value given for that queue in the filter/queue/bandwidth (F/Q/B) table 18, which is coupled to the dequeue logic 26 by way of a path 25. The dequeued packets are applied from block 26 to optional additional logic illustrated as a block 30, such as that for routing, compression, or encoding. The packets are then applied to OUT interface block 32 for application to the bandwidth-limited network (not illustrated).
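One common way to make a queue's outgoing bit rate conform to a configured bandwidth is a token-bucket discipline: the queue accumulates transmission credit at its configured rate and may release a packet only when enough credit is available. The sketch below illustrates that general technique under assumed names and units; the text does not specify the rate-conformance mechanism actually used.

```python
# Minimal token-bucket sketch of per-queue rate conformance.
# rate_bps, burst_bits, and all numeric values are illustrative
# assumptions, not parameters taken from the described system.
class TokenBucket:
    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate = rate_bps        # configured bandwidth, bits/second
        self.capacity = burst_bits  # maximum accumulated credit, bits
        self.tokens = burst_bits    # start with a full bucket
        self.last = 0.0             # time of last update, seconds

    def try_send(self, packet_bits: int, now: float) -> bool:
        """Release a packet only if sufficient credit has accrued."""
        # Accrue tokens for the elapsed interval, capped at capacity.
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True
        return False
```

With a rate of 1000 bits/second and a 1000-bit burst allowance, one 1000-bit packet can pass immediately, a second is held back, and after one second of accrued credit the next packet may pass.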
Those skilled in the art know that those queues of set 24 of queues of FIG. 1 which are occupied by higher-priority data or messages are read more often, or for a longer duration, than queues occupied by data or messages of lower priority. This allows all the data or message traffic to flow, but at a rate that can be accommodated by the bandwidth-limited network. The net result of the prior-art arrangement of FIG. 1 is to preferentially advance the processing (passage over the network) of higher-priority data packets at the expense of the less-preferred or lower-priority data packets. Under unfavorable conditions, the queues of the less-preferred data may overflow, with the result of loss of data.
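The preferential servicing described above, in which higher-priority queues are read more often per cycle than lower-priority queues, can be sketched as a weighted round-robin discipline. The weights and queue contents below are illustrative assumptions; the text does not state the specific scheduling algorithm employed.

```python
# Weighted round-robin sketch: each queue index appears in the service
# order as many times as its weight, so higher-weight (higher-priority)
# queues are visited more often per cycle while lower-priority queues
# still drain. Weights are hypothetical.
from collections import deque

def weighted_service_order(weights):
    """Build one cycle's visit order: index i repeated weights[i] times."""
    order = []
    for index, weight in enumerate(weights):
        order.extend([index] * weight)
    return order

def dequeue_cycle(queues, weights):
    """Dequeue up to one packet per service slot for a single cycle."""
    sent = []
    for index in weighted_service_order(weights):
        if queues[index]:
            sent.append(queues[index].popleft())
    return sent
```

For example, with weights of 3 and 1, a full cycle dequeues three packets from the high-priority queue for every one packet from the low-priority queue, so low-priority traffic still flows but at a reduced rate.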
Prior-art traffic shapers such as that of FIG. 1 are effective in limiting traffic rates and guaranteeing resource availability for individual applications under worst-case demand assumptions. Effective deployment of such traffic shapers requires prior knowledge of the network resource requirements of impinging applications, information that can only come from a network engineer. In practice, however, the worst-case scenario seldom develops, and the average resource demand on the network is less than the worst case predicts. Thus, network utilization is not maximized, in order to guarantee proper operation under worst-case demand.
Improved traffic shaping is desired.