1. Technical Field
The present invention relates generally to an improved data processing apparatus and method. More specifically, the present invention is directed to an apparatus and method for providing priority control with regard to resource allocation.
2. Description of Related Art
Bandwidth management techniques have been employed in networking for several years to prevent a small number of users from consuming the full bandwidth of a network, thereby starving other users. The most recent bandwidth management technology handles 10 Gbits per second (1.25 GB/sec), corresponding to 27 million packets per second or fewer. The techniques employed for managing network bandwidth and priority often require an entire Application Specific Integrated Circuit (ASIC) chip.
Traffic rates inside computer systems are typically higher than network traffic rates. For example, some traffic rates inside computer systems may reach 25 GB per second and may involve roughly 200 million 128-byte packets per second. Because this bandwidth must be managed across multiple resources and exceeds typical network bandwidths, managing bandwidth inside a computer system is nearly an order of magnitude more difficult than network bandwidth management. In addition, the circuits that can be devoted to the management of bandwidth within a computer, such as on a system-on-a-chip, are a small fraction of the circuits on the chip. As a result, the techniques used for network bandwidth management cannot be utilized for management of bandwidth within a computer system. Thus, simpler schemes requiring fewer circuits are needed for managing bandwidth in a computer system.
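The rates discussed above can be sanity-checked with simple arithmetic; a minimal sketch, using the figures stated in the text and treating GB as 10^9 bytes (illustrative only; actual rates vary by system):

```python
# Network case from the text: a 10 Gbit/s link carries 1.25 GB/s of data.
network_bytes_per_sec = 10e9 / 8          # 1.25e9 bytes/sec

# Internal-bus case from the text: 25 GB/s of 128-byte packets.
bus_bytes_per_sec = 25e9                  # bytes/sec
packet_size = 128                         # bytes per packet
packets_per_sec = bus_bytes_per_sec / packet_size

print(network_bytes_per_sec)              # 1250000000.0
print(packets_per_sec)                    # 195312500.0, i.e. ~200 million/sec
```

The internal rate works out to about 195 million packets per second, consistent with the approximate figure of 200 million cited above.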
Often a simple round-robin priority scheme is used to manage bandwidth within a computer system because it promotes fairness among all of the units in the computer system that contend for the bandwidth. Alternatively, a fixed priority scheme may be used for devices that require low latency. Low latency is desired for some devices because of their limited buffering and the negative performance implications of over-running those buffers. Low latency may also be needed for devices whose performance limits the performance of the entire computer system. To satisfy this need, some systems simply assign priority to a specific requester, such as an I/O bridge, such that all I/O accesses to the system memory bus have higher priority than any processor attached to the system bus.
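The two schemes described above can be sketched together in a few lines; a minimal illustration, assuming a single arbiter that grants one requester per cycle (the class name, indices, and the notion of a request set are hypothetical, for illustration only):

```python
class Arbiter:
    """Grants one requester per cycle: fixed-priority requesters win first
    (in listed order), then the remaining requesters share round-robin turns."""

    def __init__(self, num_requesters, fixed_priority=()):
        # e.g. fixed_priority=[0] models an I/O bridge that needs low latency
        self.fixed = list(fixed_priority)
        self.others = [r for r in range(num_requesters) if r not in self.fixed]
        self.next_rr = 0  # round-robin pointer into self.others

    def grant(self, requests):
        """requests: set of requester indices asserting a request this cycle."""
        # Fixed-priority requesters always beat round-robin requesters.
        for r in self.fixed:
            if r in requests:
                return r
        # Round-robin: scan from the pointer, wrap, advance past the winner.
        n = len(self.others)
        for i in range(n):
            r = self.others[(self.next_rr + i) % n]
            if r in requests:
                self.next_rr = (self.others.index(r) + 1) % n
                return r
        return None  # no request asserted this cycle
```

For example, with four requesters and requester 0 fixed-highest, repeated requests from requesters 1-3 are granted in rotation, while any request from requester 0 preempts the rotation.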
In a real-time computer system, a simple round-robin priority scheme is inadequate to produce guaranteed bandwidth results that vary per device and change dynamically. Moreover, using a fixed priority mechanism for some devices, coupled with a round-robin scheme for fairness among the other devices, is inadequate for a real-time system with multiple software partitions, since it can cause bandwidth starvation of lower priority devices. Granting higher priority to a device is not feasible when control of that device belongs at times to untrusted applications, intermixed with periods when a trusted operating system or hypervisor controls the device.
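The starvation risk noted above can be demonstrated with a short simulation; a minimal sketch, assuming a hypothetical fixed-priority requester (index 0) that asserts a request every cycle, as a saturating I/O device might:

```python
# Count grants over many cycles when requester 0 has fixed priority and
# never deasserts its request: requesters 1 and 2 receive no bandwidth.

def simulate(cycles):
    grants = {0: 0, 1: 0, 2: 0}
    for _ in range(cycles):
        requests = {0, 1, 2}  # every requester asserts a request each cycle
        # Fixed priority: requester 0 always wins whenever it is requesting,
        # so the round-robin requesters never get a turn.
        winner = 0 if 0 in requests else min(requests)
        grants[winner] += 1
    return grants

print(simulate(1000))  # {0: 1000, 1: 0, 2: 0}: lower-priority devices starve
```

Under these assumptions the lower-priority requesters are starved completely, which is the failure mode described above for real-time systems with multiple software partitions.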