This invention relates to computer system design and particularly to shared pipeline request fairness algorithms.
Cycle time in high-end multiprocessor systems continues to decrease as technology advances. However, the number of requests sharing a resource is increasing because today's systems include higher numbers of active processors and I/O requesters. Additionally, more and more logic is being moved onto a single chip. This combination requires new priority mechanisms that take up less physical space, consume less power, simplify design and chip wiring, and minimize the number of critical timing paths, while remaining sufficiently robust to handle an increased number of requesters.
Traditionally, the least physically demanding scheme has been basic rank priority. In this scheme, all the requesters waiting to use a resource are assigned a rank order, and a requester is allowed access to the resource only if no higher-ranked requests are present. While the basic rank priority scheme is efficient from a physical design point of view (it uses fewer latches and less silicon than more complicated schemes), logically it is not a very fair algorithm: lower-ranked requesters may be continually and indefinitely blocked by a plurality of higher-ranked requesters. The Least Recently Used (LRU) priority scheme is a more 'fair' algorithm, but it requires many latches and increases the number of critical paths and priority latency. One prior art priority scheme is taught in Shaefer et al., U.S. Pat. No. 6,119,188, to which the interested reader is referred should more detailed information be desired.
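The contrast between the two schemes described above can be sketched in software. The following is an illustrative model only (the names and data structures are assumptions, not taken from the patent or any prior art): a basic rank-priority arbiter grants the highest-ranked pending requester every cycle, while a simple LRU arbiter grants the pending requester that was granted least recently.

```python
def rank_priority_grant(requests):
    """Basic rank priority: grant the lowest-indexed (highest-ranked)
    pending requester. requests is a list of booleans, index 0 being
    the highest rank. Returns the granted index, or None if idle."""
    for i, pending in enumerate(requests):
        if pending:
            return i
    return None


class LRUArbiter:
    """Illustrative LRU arbiter: grant the pending requester that was
    granted least recently. Note the extra state (self.order) that a
    hardware LRU scheme must hold in latches."""

    def __init__(self, n):
        # Order from least to most recently granted; start in rank order.
        self.order = list(range(n))

    def grant(self, requests):
        for i in self.order:
            if requests[i]:
                # Move the winner to the most-recently-granted position.
                self.order.remove(i)
                self.order.append(i)
                return i
        return None


# With requesters 0 and 1 both requesting every cycle, rank priority
# grants requester 0 forever (starving requester 1), while LRU alternates.
rank_grants = [rank_priority_grant([True, True]) for _ in range(4)]
lru = LRUArbiter(2)
lru_grants = [lru.grant([True, True]) for _ in range(4)]
```

The model makes the trade-off concrete: the rank-priority function is stateless (cheap in latches but unfair under sustained contention), whereas the LRU arbiter carries per-requester ordering state, mirroring the latch cost and added critical paths noted above.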