1. Technical Field
This invention relates to a data processing system having multiple queues for managing requests. Specifically, this invention relates to a method and means for dynamically managing queue prioritization in a data processing system having a storage subsystem.
2. Description of the Background Art
In a data processing system (system) 100 having a host system (CPU) and a storage subsystem, where the storage subsystem comprises a plurality of storage devices (devices) and a storage controller, a list is maintained of the current status of all outstanding requests for access to any given storage device (FIG. 1). This list is generally referred to as a job queue, task queue, ready list, work list, or simply a queue. The requests (also referred to as "jobs") in each queue are usually serviced on a first-in, first-out (FIFO) basis. Moreover, to provide the requests for access to a storage device with different priorities, several queues having different priorities are usually maintained at the storage subsystem for each device. For example, in order to process access requests to a storage device, the storage subsystem may maintain four queues for each device where, by design, Q.sub.0 is the queue of lowest priority, Q.sub.1 is a queue of low priority, Q.sub.2 is a queue of medium priority, and Q.sub.3 is the queue of highest priority. The four queues may be maintained either at the storage controller or at each device.
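The per-device arrangement described above can be sketched as follows. This is a minimal illustrative model, not the patented design; the class and method names are hypothetical, and the four FIFO queues correspond to Q.sub.0 through Q.sub.3.

```python
from collections import deque

NUM_PRIORITIES = 4  # Q0 (lowest) through Q3 (highest), as described above

class DeviceQueues:
    """Hypothetical sketch: one set of FIFO priority queues per device."""

    def __init__(self):
        # One FIFO queue per priority level; index 3 is the highest priority.
        self.queues = [deque() for _ in range(NUM_PRIORITIES)]

    def enqueue(self, request, priority):
        # New requests join the tail of the queue for their priority.
        self.queues[priority].append(request)

    def next_request(self):
        # Scan from highest to lowest priority; FIFO within each queue.
        for p in range(NUM_PRIORITIES - 1, -1, -1):
            if self.queues[p]:
                return self.queues[p].popleft()
        return None  # no pending requests for this device
```

A request enqueued on Q.sub.3 is always returned before any request on Q.sub.2, Q.sub.1, or Q.sub.0, regardless of arrival order.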
In general, the storage controller polls the queues for processing requests. If there are pending requests in Q.sub.3, the controller processes the next request from Q.sub.3 since that is the queue with the highest priority. If there are no pending requests in Q.sub.3, the storage controller then processes the next request from Q.sub.2 since Q.sub.2 is the next highest priority queue. However, if Q.sub.3 receives a request for access to a device before the request from Q.sub.2 has been completed, the processing of the request from Q.sub.2 is generally interrupted, the unfinished request is stored back in Q.sub.2, and the request from Q.sub.3 is then processed.
In a similar manner, if there are no pending requests in Q.sub.3 and Q.sub.2, the storage controller then processes the next request from Q.sub.1 since Q.sub.1 would be the next highest priority queue. However, if either Q.sub.3 or Q.sub.2 receives a request for access to a device before the request from Q.sub.1 has been completed, the processing of the request from Q.sub.1 is generally interrupted, the unfinished request is stored back in Q.sub.1, and the request from Q.sub.3 or Q.sub.2 is then processed.
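The polling and interruption behavior described in the two paragraphs above can be sketched as follows. This is an illustrative model under simplifying assumptions (a single device, arrivals simulated as a parameter rather than occurring concurrently); all names are hypothetical.

```python
from collections import deque

class PreemptingController:
    """Sketch of a controller that services the highest-priority queue and
    preempts a lower-priority request when a higher-priority one arrives."""

    def __init__(self, num_priorities=4):
        self.queues = [deque() for _ in range(num_priorities)]

    def enqueue(self, request, priority):
        self.queues[priority].append(request)

    def highest_pending(self):
        # Index of the highest-priority non-empty queue, or None if all empty.
        for p in range(len(self.queues) - 1, -1, -1):
            if self.queues[p]:
                return p
        return None

    def service_step(self, arrivals=()):
        """Take the next request; `arrivals` simulates requests that arrive
        while this request is being serviced."""
        p = self.highest_pending()
        if p is None:
            return None
        request = self.queues[p].popleft()
        for req, prio in arrivals:
            self.enqueue(req, prio)
        higher = self.highest_pending()
        if higher is not None and higher > p:
            # Processing is interrupted: store the unfinished request back
            # at the front of its queue and service the higher-priority one.
            self.queues[p].appendleft(request)
            return self.service_step()
        return request
```

For example, if a Q.sub.3 request arrives while a Q.sub.2 request is in service, the Q.sub.2 request is returned to the front of Q.sub.2 and the Q.sub.3 request is completed first.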
Using this method, requests with high priorities are processed very quickly, but requests from lower priority queues can get "stuck" for a long time because requests from higher priority queues continuously interrupt their processing. These ongoing interruptions of the processing of requests from lower priority queues result in a substantial waste of system resources.
In general, in order to deal with this serious problem, a data processing system must have means for detecting stuck requests and moving them from lower priority queues to higher priority queues so they can be processed. This approach, which is presently the prevalent method of dealing with stuck requests, requires a complicated controller design, queue implementation, and system coding to detect stuck requests and move them from lower priority queues to higher priority queues for execution. Therefore, managing stuck requests and minimizing the number of request interrupts in any data processing system having multiple queues is of paramount importance.
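The prevalent "detect and promote" approach described above can be sketched as follows. This is a hedged illustration of the prior-art technique, not the invention; the function name and the wait-time threshold are hypothetical, and each pass promotes a stuck request by one priority level.

```python
from collections import deque

def promote_stuck(queues, now, threshold=5.0):
    """Prior-art sketch: move stuck requests up one priority level.

    queues: list of deques of (enqueue_time, request), indexed by priority
            (index 0 is lowest). `threshold` is an illustrative wait limit.
    """
    # Iterate from the second-highest queue down so a request promoted into
    # queue p+1 is not promoted again within the same pass.
    for p in range(len(queues) - 2, -1, -1):
        kept = deque()
        while queues[p]:
            ts, req = queues[p].popleft()
            if now - ts > threshold:
                queues[p + 1].append((ts, req))  # promote stuck request
            else:
                kept.append((ts, req))           # leave fresh request in place
        queues[p] = kept
    return queues
```

Note that this bookkeeping (timestamps, periodic scans, dequeue-and-requeue of every entry) is exactly the added complexity and resource cost the paragraph above points out.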
Therefore, there is a need for an invention that allows stuck requests to be processed as efficiently as possible without depleting valuable system resources in the process of moving them from the lower priority queues to the higher priority queues for processing.