The communications industry is rapidly changing to adjust to emerging technologies and ever-increasing customer demand. This customer demand for new applications and increased performance of existing applications is driving communications network and system providers to employ networks and systems having greater speed and capacity (e.g., greater bandwidth). In trying to achieve these goals, a common approach taken by many communications providers is to use packet switching technology. Increasingly, public and private communications networks are being built and expanded using various packet technologies, such as Internet Protocol (IP).
A network device, such as a switch or router, typically receives, processes, and forwards or discards a packet. For example, an enqueuing component of such a device receives a stream of packets of various sizes, which are accumulated in an input buffer. Each packet is analyzed, and an appropriate amount of memory space is allocated to store the packet. The packet is stored in memory, while certain attributes (e.g., destination information and other information typically derived from a packet header or other source) are maintained in separate memory. Once the entire packet is written into memory, the packet becomes eligible for processing, and an indicator of the packet is typically placed in an appropriate destination queue for being serviced according to some scheduling methodology.
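The enqueuing flow described above can be sketched as follows. This is a minimal illustrative model, not an actual device implementation; all class and attribute names (e.g., `Enqueuer`, `packet_memory`) are hypothetical.

```python
from collections import deque

class Enqueuer:
    """Illustrative sketch of an enqueuing component: the packet body is
    stored in one memory, its attributes in separate memory, and an
    indicator is placed on a destination queue once the write completes."""

    def __init__(self):
        self.packet_memory = {}   # simulated packet storage, keyed by handle
        self.attributes = {}      # per-packet attributes, kept separately
        self.queues = {}          # destination queue -> packet indicators
        self._next_handle = 0

    def enqueue(self, payload: bytes, destination: str) -> int:
        handle = self._next_handle
        self._next_handle += 1
        # Store the packet itself.
        self.packet_memory[handle] = payload
        # Attributes (e.g., destination, size) are maintained separately.
        self.attributes[handle] = {"dest": destination, "size": len(payload)}
        # Once the entire packet is written, it becomes eligible for
        # processing: place an indicator on the appropriate queue.
        self.queues.setdefault(destination, deque()).append(handle)
        return handle
```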
Certain packet types and classifications of packet traffic must be sent at certain rates for reasons such as the nature of the traffic or a rate level guaranteed by a service provider, wherein the term “rate” as used herein typically refers to real-time rate and/or the resultant effect of any weighted service policy, such as, but not limited to, virtual time weights, tokens, credits, events, etc. Thus, a scheduling system must deliver service at a specified rate to a queue containing packets of varying sizes. Furthermore, the rate delivery system must be able to support both real-time rate delivery (e.g., a fixed number of bytes per second) and virtual time rate delivery (e.g., a weighted fraction of the total available bandwidth). Rates must be encoded, stored, computed, and tracked in a fashion that is easily interpreted by hardware and/or software. For calendar-based schedulers, this rate is normally encoded as a quantum (i.e., a number of bytes) served in an interval (i.e., a number of calendar slots).
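The quantum/interval encoding can be sketched under a simple assumed model (not drawn from any particular implementation): the calendar advances at a fixed number of slots per second, so a queue served a quantum of bytes every interval slots receives quantum × slots_per_sec / interval bytes per second. The function names below are hypothetical.

```python
def encode_rate(target_bps: float, quantum: int, slots_per_sec: int) -> tuple:
    """Encode a byte rate as a (quantum, interval) pair, holding the
    quantum fixed and choosing the nearest integer interval."""
    interval = max(1, round(quantum * slots_per_sec / target_bps))
    return quantum, interval

def decode_rate(quantum: int, interval: int, slots_per_sec: int) -> float:
    """Rate actually delivered by a (quantum, interval) encoding."""
    return quantum * slots_per_sec / interval
```

Because the interval must be an integer, the delivered rate is only an approximation of the target, which is one source of the accuracy problems discussed below.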
Quantum/interval encoding has several problems, including variable rate accuracy and a burstiness property. At the fast end of the range, the accuracy is n bytes (i.e., the quantum) in one interval, as the rate can only be changed by varying the number of bytes sent, and hence the accuracy is related to the maximum transmission unit (MTU). Thus, a ten thousand byte MTU would offer one part in ten thousand, but a fifteen hundred byte MTU (e.g., as used in Ethernet) would only offer one part in fifteen hundred. The quantum/interval scheme does not deliver rates smoothly. For example, with a ten thousand byte quantum, a queue sending forty byte packets might need to burst two hundred fifty packets before it was rescheduled. If its interval were greater than one, it would be preferable to reschedule it in intermediate steps as it sent each packet, rather than after all two hundred fifty packets are sent.
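The burst size in the example above is simply the quantum divided by the packet size (the function name is illustrative):

```python
def packets_per_service(quantum: int, packet_size: int) -> int:
    """Number of packets a queue may burst in one service of `quantum`
    bytes before being rescheduled."""
    return quantum // packet_size

# With a ten thousand byte quantum and forty byte packets:
# packets_per_service(10_000, 40) -> 250
```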
Most known systems have used some variant of the quantum/interval approach, where on each service of some fixed quantum of bytes, the calendar is advanced by a certain interval. Generally, these systems have either used large quantums, greater than the size of an MTU, or they have had to employ other techniques to deal with quantums that are less than an MTU. Larger quantums avoid complexities in the implementation, but the trade-off is much more burstiness.
Some systems have mitigated the burstiness problem by using a quantum that is smaller than an MTU. However, because packets can then be much larger than one quantum, a division operation (i.e., size/quantum) is required to compute the number of calendar slots to be moved. While this improves the smoothness of rate delivery, it does so only at a trade-off in accuracy, as the use of smaller quantums to deliver rates exacerbates the variable rate accuracy issue. Also, division is typically a very expensive operation. If a hardware divide capability is not available (as on many embedded software platforms), either the quantum must be restricted to a power of two, which results in rate granularity problems, or the division must be done iteratively, in which case rate computation does not operate in a fixed time. Moreover, using a hardware-implemented divide operation could also introduce issues with “drift”/round-off errors which cause some of the desired rate to be lost. Accordingly, prior systems can offer granularity, but trade off smoothness against accuracy. Further, the need to create quantum/interval pairs to encode rates, and the constraints on the creation of those pairs, can make it difficult to configure such systems. In particular, systems which use a power-of-two-only quantum may require iterative procedures to define sets of rates that best meet their individual criteria and their relationship to each other.
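The three ways of computing the slot advance discussed above can be sketched as follows. This is an illustrative comparison (all function names are hypothetical): a general division, the shift that a power-of-two quantum permits, and the divide-free iterative form whose running time grows with the packet size.

```python
def slots_by_divide(size: int, quantum: int) -> int:
    """General quantum: true division (rounded up so the whole packet
    is charged). Requires a divide capability."""
    return -(-size // quantum)  # ceiling division

def slots_by_shift(size: int, quantum_log2: int) -> int:
    """Power-of-two quantum (2**quantum_log2): the divide becomes a
    shift, at the cost of coarse rate granularity."""
    quantum = 1 << quantum_log2
    return (size + quantum - 1) >> quantum_log2

def slots_iterative(size: int, quantum: int) -> int:
    """Divide-free iterative form: the loop count grows with
    size/quantum, so the computation does not run in fixed time."""
    slots = 0
    while size > 0:
        size -= quantum
        slots += 1
    return slots
```

For a fifteen hundred byte packet and a 512-byte quantum, all three forms agree on three slots; they differ only in cost and in the quantums they can support.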