In packet-switched networks, routers, switches, and gateways are examples of the types of network devices that effect delivery of packets from source endpoints to destination endpoints. These routing devices receive network packets at one or more ingress ports. For each received packet, the routing device examines one or more of the packet's headers and then determines an egress port that will move the packet most effectively towards its destination. The device switches the packet to that egress port, where it will be placed back on the network to wend its way towards the destination endpoint.
Routing devices generally place a packet in an internal queue until the packet can be switched to the appropriate egress port, and/or until the egress port has transmitted earlier-arriving packets. One simple way to implement an internal queue is with a single FIFO (first-in, first-out) buffer. The main drawback of this method is that it cannot prioritize one type of traffic over any other; for instance, a burst of low-priority traffic from one source will delay higher-priority traffic arriving slightly later in time, merely because the low-priority traffic entered the FIFO buffer first.
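The head-of-line blocking described above can be sketched in a few lines. This is an illustrative toy, not an actual device implementation; the tuple packet format is invented for the example.

```python
from collections import deque

# A single FIFO buffer cannot distinguish traffic classes: a burst of
# low-priority packets enqueued first delays a high-priority packet that
# arrives slightly later (head-of-line blocking).
fifo = deque()
for i in range(4):
    fifo.append(("low", i))    # low-priority burst arrives first
fifo.append(("high", 0))       # high-priority packet arrives later

# Service packets strictly in arrival order.
service_order = [fifo.popleft() for _ in range(len(fifo))]
# The high-priority packet is serviced last, despite its priority.
```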
Several multiple-queue schemes have been devised to allow more intelligent prioritization of queued packets. In some of these schemes, a classifier assigns a priority to each arriving packet, and sends the packet to a FIFO queue dedicated to the traffic class that includes all packets with that priority. For instance, FIG. 1 shows an ingress packet stream 22, each packet in the stream representing one of four different classes as indicated by the symbolic shading applied to each packet in stream 22. Classifier 24 “sorts” the packets by class into one of four FIFO queues A, B, C, D in queue memory 25.
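The classification step of FIG. 1 can be sketched as follows, assuming each packet already carries a class tag; the dictionary packet format and the `classify` helper are invented for illustration.

```python
from collections import deque

# Four FIFO queues, one per traffic class, as in queue memory 25 of FIG. 1.
queues = {c: deque() for c in "ABCD"}

def classify(packet):
    """Return the traffic class for a packet (assumed pre-tagged)."""
    return packet["cls"]

# A toy ingress stream of eight packets spanning the four classes.
ingress = [{"cls": c, "seq": i} for i, c in enumerate("ABACDDBA")]
for pkt in ingress:
    queues[classify(pkt)].append(pkt)
```

Each queue now holds only packets of its own class, preserving arrival order within the class.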
Once packets have been classified and assigned to appropriate queues, a scheduler recombines the packets into an egress packet stream. “Priority queuing” (PQ) is perhaps the most straightforward way of doing this. With PQ, queues are ranked highest to lowest in priority. Anytime that a packet waits in the highest-priority queue, it will be serviced next. When the highest-priority queue is empty, the next-highest priority queue is serviced exclusively until it is also empty, and so on. This technique works well for higher-priority traffic, but provides no guarantee as to when, if ever, lower-priority traffic will receive service.
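A strict PQ scheduler of the kind just described can be sketched in a few lines; the list-of-deques representation is an assumption for the example.

```python
from collections import deque

def pq_dequeue(queues):
    """Strict priority queuing: always service the highest-priority
    nonempty queue. `queues` is ordered highest to lowest priority."""
    for q in queues:
        if q:
            return q.popleft()
    return None  # all queues empty

high = deque(["h1", "h2"])
low = deque(["l1"])

order = []
while (pkt := pq_dequeue([high, low])) is not None:
    order.append(pkt)
# order == ["h1", "h2", "l1"]: the low-priority packet is served only
# after the high-priority queue has drained, illustrating potential
# starvation of lower-priority traffic.
```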
Scheduler 26 of FIG. 1 provides an alternative to PQ, known as “weighted round robin” (WRR) queuing. WRR assigns a target link utilization ratio (LUR) to each queue. For instance, in FIG. 1, queue A has an LUR of 4/10, meaning that queue A receives 40% of the bandwidth on egress packet stream 28. WRR scheduler 26 visits each queue in queue memory 25 in round-robin fashion, dwelling long enough on each queue to dequeue a number of bytes from that queue proportional to that queue's LUR. For instance, the value N in FIG. 1 could be set to 3000 bytes. Using the queue LUR values, scheduler 26 dequeues 12,000 bytes from queue A, then 6000 bytes from queue B, then 3000 bytes from queue C, and finally 9000 bytes from queue D. The scheduler then returns to queue A and repeats the process.
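One round of the WRR scheduler above can be sketched as follows, using N = 3000 bytes and the LUR numerators 4, 2, 1, 3 from FIG. 1. The 1500-byte packet sizes and the tuple packet format are assumptions for the example.

```python
from collections import deque

N = 3000  # byte quantum per LUR unit, as in FIG. 1
lur = {"A": 4, "B": 2, "C": 1, "D": 3}  # LURs 4/10, 2/10, 1/10, 3/10

# Each queue holds (name, size_in_bytes) packets; 20 packets of
# 1500 bytes per queue, purely for illustration.
queues = {c: deque((f"{c}{i}", 1500) for i in range(20)) for c in "ABCD"}

def wrr_round(queues, lur, quantum):
    """One WRR round: dequeue up to lur[c] * quantum bytes per queue."""
    egress = []
    for c, q in queues.items():
        budget = lur[c] * quantum   # e.g. queue A: 4 * 3000 = 12,000 bytes
        while q and budget >= q[0][1]:
            name, size = q.popleft()
            budget -= size
            egress.append((name, size))
    return egress

out = wrr_round(queues, lur, N)
# With 1500-byte packets, one round moves 8 packets from A (12,000 bytes),
# 4 from B, 2 from C, and 6 from D, matching the 4:2:1:3 LUR ratio.
```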
Another alternative to PQ and WRR is “weighted fair queuing” (WFQ). A WFQ scheduler uses a flow-based classification technique. That is, instead of lumping all packets into one of several classes, packets are classified according to their source and destination address and port, upper-layer protocols, etc., such that packets from the same end-to-end packet “flow” or “stream” are placed in a queue unique to that flow. WFQ can weight each flow according to that flow's class or quality-of-service—in general, however, packets from low-volume streams propagate quickest through WFQ scheduling, and high-volume streams share the remaining bandwidth fairly.
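The flow-based classification step that WFQ relies on can be sketched as follows; the 5-tuple field names and addresses are invented for the example, and the full WFQ scheduling discipline (per-flow weighting and service) is omitted.

```python
from collections import deque, defaultdict

# Packets sharing the same 5-tuple belong to the same end-to-end flow
# and are placed in a queue unique to that flow.
flow_queues = defaultdict(deque)

def flow_key(pkt):
    """Identify a flow by its 5-tuple."""
    return (pkt["src"], pkt["dst"], pkt["proto"], pkt["sport"], pkt["dport"])

pkts = [
    {"src": "10.0.0.1", "dst": "10.0.0.2", "proto": "tcp", "sport": 1111, "dport": 80},
    {"src": "10.0.0.1", "dst": "10.0.0.2", "proto": "tcp", "sport": 1111, "dport": 80},
    {"src": "10.0.0.3", "dst": "10.0.0.2", "proto": "udp", "sport": 2222, "dport": 53},
]
for p in pkts:
    flow_queues[flow_key(p)].append(p)
# Two distinct flows result: the two TCP packets share one queue, while
# the low-volume UDP packet gets its own queue and is not stuck behind
# the TCP flow's backlog.
```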