In many data processing systems, one or more resources may be shared among the processors of that system. For example, multiple instruction processors and/or I/O processors may share access to a common main memory. Other resources, such as cache memories, may also be shared in this manner. In these types of systems, some mechanism must first be provided to throttle the issuance of requests since, generally, only a limited number of requests can be queued to any given shared resource at once. Second, the system must implement some type of fairness algorithm to ensure that no one requester is denied access to the resource for an extended period of time. Ideally, the system further limits the number of requests queued to a given resource so that queuing latency does not exceed acceptable thresholds.
Several mechanisms are available for throttling requests. According to one mechanism, when a requester such as an instruction processor issues a request to a shared resource such as a main memory, that requester will not issue another request until some type of acknowledgement signal is received from the main memory. However, if the time required for a request to travel to the main memory, and for the corresponding acknowledgement to be returned to the processor, is relatively large, the issuance of requests by the processor may be unduly restricted. Thus, this type of throttling mechanism is generally only used in those instances where the request and associated response times are relatively small.
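The acknowledgement-based mechanism described above can be sketched as follows. This is a minimal single-threaded illustration, assuming a synchronous request/acknowledge exchange; the class and method names are illustrative and are not drawn from any particular system.

```python
class SharedResource:
    """Models a shared resource (e.g., a main memory) that returns an
    acknowledgement once it has accepted a request."""

    def __init__(self):
        self.processed = 0

    def handle(self, request):
        self.processed += 1
        return ("ack", request)


class Requester:
    """A requester (e.g., an instruction processor) that will not issue
    a new request until the acknowledgement for its previous request
    has been received."""

    def __init__(self, resource):
        self.resource = resource
        self.awaiting_ack = False

    def issue(self, request):
        if self.awaiting_ack:
            raise RuntimeError("previous request not yet acknowledged")
        self.awaiting_ack = True
        ack = self.resource.handle(request)  # round trip to the resource
        self.awaiting_ack = False            # ack received; may issue again
        return ack
```

The round-trip latency of `handle` is exactly what limits this scheme: while the request and its acknowledgement are in flight, the requester sits idle.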
According to another mechanism, a flow control signal may be utilized to control the rate at which requests are issued. This signal informs the requester to stop issuing new requests. When utilizing this mechanism, the shared resource must issue the flow control signal early enough to prevent queue overflow, since requests will continue to be issued while the flow control signal is en route to the requester. Thus, in this configuration, limitations on request issuances may be overly restrictive.
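The need to raise the flow control signal early can be illustrated with a short sketch. The queue depth and signal latency below are illustrative assumptions: the resource must leave enough headroom to absorb requests that are already in flight when the signal is asserted.

```python
from collections import deque

QUEUE_DEPTH = 16     # maximum requests the resource can queue (assumed)
SIGNAL_LATENCY = 4   # requests that may still arrive after "stop" is raised

class FlowControlledQueue:
    """A request queue that asserts a flow control signal before it is
    actually full, reserving headroom for in-flight requests."""

    def __init__(self):
        self.queue = deque()
        self.stop = False  # flow control signal observed by requesters

    def enqueue(self, request):
        self.queue.append(request)
        # Assert the signal early: headroom must cover in-flight requests.
        if len(self.queue) >= QUEUE_DEPTH - SIGNAL_LATENCY:
            self.stop = True

    def dequeue(self):
        request = self.queue.popleft()
        if len(self.queue) < QUEUE_DEPTH - SIGNAL_LATENCY:
            self.stop = False
        return request
```

Because the signal is asserted while `SIGNAL_LATENCY` slots remain free, the queue cannot overflow, but those reserved slots go unused in the common case, which is the over-restriction noted above.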
Another technique for managing the issuance of requests to a shared resource involves the use of a debit/credit mechanism. In this type of system, each requester is granted a predetermined number of credits. When a requester issues a request, this number is debited. Conversely, when the shared resource has finished processing a request, the resource grants another credit to the requester that issued the completed request. A requester is allowed to continue issuing requests so long as it has one or more credits remaining. This type of solution is more logic-intensive than the foregoing mechanisms.
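The debit/credit mechanism reduces to a per-requester counter, as in the following minimal sketch. The class name and initial credit count are illustrative assumptions.

```python
class CreditThrottle:
    """Debit/credit throttle for one requester: issuing a request
    debits a credit, and the shared resource grants the credit back
    when it completes the request."""

    def __init__(self, initial_credits=4):
        self.credits = initial_credits

    def can_issue(self):
        # A requester may continue issuing requests so long as it has
        # one or more credits remaining.
        return self.credits > 0

    def issue(self):
        if not self.can_issue():
            raise RuntimeError("no credits remaining; requester must wait")
        self.credits -= 1

    def complete(self):
        # Called when the resource finishes processing a request.
        self.credits += 1
```

The extra logic this scheme requires lies in tracking outstanding credits per requester and routing each completion back to the correct counter.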
Retry mechanisms provide yet another type of system for throttling requests. Retry mechanisms generally involve temporarily removing requests from queues to allow other requests to advance within the system. Selection of the requests that are allowed to advance is based on predetermined criteria that establish some type of priority among requesters or types of requests.
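A retry mechanism of this general type can be sketched as follows, assuming the priority criteria are supplied as a predicate that decides whether a request may currently advance; the function name and the retry limit are illustrative.

```python
from collections import deque

def drain_with_retry(requests, can_advance, retry_limit=100):
    """Process requests in order; a request that cannot currently
    advance is temporarily removed and requeued at the back, allowing
    the requests behind it to make progress.

    can_advance(request, completed) encodes the predetermined priority
    criteria; `completed` is the list of requests finished so far.
    """
    q = deque(requests)
    completed, retries = [], 0
    while q and retries < retry_limit:
        req = q.popleft()
        if can_advance(req, completed):
            completed.append(req)
        else:
            q.append(req)  # retried: requests behind it may now advance
            retries += 1
    return completed, list(q)
```

Note that the `retry_limit` guard is exactly where the live-lock hazard surfaces: without some bound or fairness rule, an unlucky request could be requeued indefinitely.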
Retry mechanisms generally provide good performance for high throughput multi-processor systems. However, such mechanisms may lead to the unwanted occurrence of “live-lock” situations. A live-lock situation involves the improbable but possible scenario wherein requests from one requester are continually retried and are therefore prevented from making progress. If one of these requests is associated with the return of some data or access rights that are needed to satisfy pending requests from other requesters, this unlikely scenario may cause the entire system to experience significant performance degradation.
What is needed, therefore, is an improved system and method for managing requests to a shared resource that addresses the foregoing problems.