In many telecommunications applications, a scheduler is used to resolve contention among multiple tasks competing for a limited resource. For example, such a scheduler is commonly used in a network processor to schedule multiple traffic flows for transmission over a specific transmission bandwidth.
A network processor generally controls the flow of data between a physical transmission medium, such as a physical layer portion of a network, and a switch fabric in a router or other type of switch. An important function of a network processor involves the scheduling of cells, packets or other data blocks, associated with the multiple traffic flows, for transmission to the switch fabric from the physical transmission medium of the network and vice versa. The network processor scheduler performs this function.
An efficient and flexible scheduler architecture capable of supporting multiple scheduling algorithms is disclosed in U.S. patent application Ser. No. 10/722,933, filed Nov. 26, 2003 in the name of inventors Asif Q. Khan et al. and entitled “Processor with Scheduler Architecture Supporting Multiple Distinct Scheduling Algorithms,” which is commonly assigned herewith and incorporated by reference herein.
It is often desirable for a given scheduling algorithm implemented in a network processor or other processing device to be both simple and fair. Simplicity is important because the processing device hardware typically does not have a large amount of time to make a given scheduling decision, particularly in a high data rate environment. Fairness is also important; for example, the scheduler may allocate bandwidth according to the weights of the users, with higher-priority users receiving more bandwidth than lower-priority users.
An example of a simple and fair scheduling algorithm is the Weighted Round-Robin (WRR) scheduling algorithm. Assume that in a given telecommunications application a number of users compete for one resource, where the resource can process one data block in each timeslot. The scheduler must decide which user can send one data block to the resource for processing in each timeslot. Each user has a weight indicating its priority, with a larger weight denoting a higher priority. Under ideal conditions, the service received by each user should be proportional to its weight. A WRR scheduler serves the users in proportion to their weights in a round-robin fashion.
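The weight-proportional round-robin service described above can be sketched as follows. This is a minimal illustration, not an implementation from the cited references; it assumes integer weights and unit-size data blocks, and all function and parameter names are hypothetical:

```python
def wrr_schedule(weights, num_slots):
    """Weighted Round-Robin: in each round, each user may send up to
    `weight` data blocks, so service is proportional to the weights.

    weights: dict mapping user id -> integer weight (larger = higher priority)
    num_slots: total number of timeslots to schedule
    Returns the list of user ids served, one per timeslot.
    """
    served = []
    while len(served) < num_slots:
        # One round-robin pass: visit every user, granting `weight` slots each.
        for user, weight in weights.items():
            for _ in range(weight):
                if len(served) == num_slots:
                    return served
                served.append(user)
    return served
```

For example, with weights {"a": 2, "b": 1}, user "a" receives twice as many timeslots as user "b" over any whole number of rounds, matching the proportional-service property described above.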
A modified version of the WRR scheduling algorithm is known as Deficit Round-Robin (DRR). In DRR scheduling, the users have respective deficit counters, and a particular user is served on a given pass of the scheduler only if its corresponding deficit counter is greater than or equal to the size of the data block to be transmitted by that user. If the deficit counter for the user is lower than the size of the data block to be transmitted, the user is skipped on the given pass but its deficit counter is increased by a designated amount referred to as a quantum. Also, the deficit counters of users transmitting data blocks on the given pass are decreased by the size of their respective transmitted data blocks.
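The deficit-counter mechanism described above can be sketched as follows. This is a simplified illustration tracking the preceding paragraph, in which a user is served on a pass only if its deficit counter covers its head-of-line data block, a skipped user's counter is increased by its quantum, and a served user's counter is decreased by the transmitted block size; all identifiers are hypothetical:

```python
def drr_schedule(queues, quanta, max_passes):
    """Deficit Round-Robin over per-user FIFO queues of data block sizes.

    queues: dict mapping user id -> list of block sizes (head at index 0)
    quanta: dict mapping user id -> quantum added when the user is skipped
    max_passes: number of round-robin passes to simulate
    Returns the transmissions as a list of (user, block_size) tuples.
    """
    deficits = {user: 0 for user in queues}
    sent = []
    for _ in range(max_passes):
        for user, queue in queues.items():
            if not queue:
                continue  # nothing to transmit for this user
            if deficits[user] >= queue[0]:
                # Deficit covers the block: transmit and charge the counter.
                size = queue.pop(0)
                deficits[user] -= size
                sent.append((user, size))
            else:
                # Deficit too small: skip this pass, credit one quantum.
                deficits[user] += quanta[user]
    return sent
```

Note that because each deficit counter starts at zero, every backlogged user is skipped on the first pass while its counter accumulates a quantum, after which blocks are transmitted as the counter permits.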
Various drawbacks of WRR, DRR and other conventional scheduling algorithms are addressed by the techniques disclosed in U.S. patent application Ser. No. 10/903,954, filed Jul. 30, 2004 and entitled “Frame Mapping Scheduler,” Ser. No. 10/998,686, filed Nov. 29, 2004 and entitled “Frame Mapping Scheduler with Compressed Mapping Table,” Ser. No. 11/415,831, filed May 1, 2006 and entitled “Wireless Network Scheduling Methods and Apparatus Based on Both Waiting Time and Occupancy,” Ser. No. 11/415,546, filed May 1, 2006 and entitled “High-Throughput Scheduler with Guaranteed Fairness for Wireless Networks and Other Applications,” Ser. No. 11/427,476, filed Jun. 29, 2006 and entitled “Credit-Based Wireless Network Scheduling,” Ser. No. 11/461,181, filed Jul. 31, 2006 and entitled “High-Throughput Scheduler with Integer-Based Eligible Number Initialization,” and Ser. No. 11/468,917, filed Aug. 31, 2006 and entitled “Scheduling Methods and Apparatus Based on Adjusted Channel Capacity,” all of which are commonly assigned herewith and incorporated by reference herein.
Despite the considerable advances provided by the scheduling techniques disclosed in the above-cited references, a need remains for further improvements. For example, many conventional network processors treat the output bandwidth as the only resource to be scheduled. Such an arrangement is appropriate in applications in which bandwidth is the primary resource bottleneck. However, the emergence of new applications such as residential gateways has led to increasing amounts of available bandwidth, via Gigabit Ethernet for example, while device processing power remains limited in such applications due to cost and size concerns. Thus, the network processor itself may in some cases become the primary resource bottleneck, resulting in underutilization of the output bandwidth.
This situation is of particular concern for traffic that is processor intensive, i.e., consumes large amounts of the processor resource. Processor intensive traffic typically involves small packet sizes, such as voice-over-IP (VoIP) traffic, and the header processing associated with such traffic can exacerbate the processor resource bottleneck. In fact, it is possible that a malicious user could attack a router or switch by generating large numbers of small-size packets having complex headers, thereby overwhelming the network processor and preventing legitimate users from accessing the output bandwidth.
Conventional approaches to dealing with allocation of two different resources fail to provide an adequate solution. These approaches generally attempt to allocate both resources fairly, or to combine the two resources and determine a single fair allocation. However, it is very difficult to achieve fairness in situations such as the processor resource bottleneck described above, where the processing power needed for a given packet is generally not known before the packet has been processed. Without that information, any fairness criteria defined for processor resource allocation will tend to be inaccurate.
Feedback control may also or alternatively be used in order to backpressure input traffic that consumes too much of a given resource. However, the input traffic from a Gigabit Ethernet port may contain thousands of flows, which makes it impractical to backpressure only some of the flows without affecting others.
Accordingly, it is apparent that a need exists for improved scheduling techniques which can avoid the problems associated with processor resource bottlenecks while also efficiently scheduling the available output bandwidth.