1. Field of the Invention
The present invention relates generally to data cache systems, and more specifically is directed toward a system and method for processing fetch requests that are completed out-of-order.
2. Related Art
Conventional computing systems typically include a memory hierarchy having multiple cache levels. Upper-level caches are typically smaller and faster than lower-level caches. The size and speed of the upper-level cache allow it to match the clock cycle time of the central processing unit (CPU). Success or failure of an access into the upper-level cache is designated as a hit or a miss, respectively: a hit means that the memory access is found in the upper level, while a miss means that it is not. Associated with a miss is a miss penalty, which is the time required to deliver the block to the requesting device (normally the CPU).
One method of reducing the miss penalty is to provide a second level of cache between the upper-level cache and main memory. This second-level cache is designed to capture many accesses that would otherwise go to the main memory. In a similar manner to the upper-level cache, the second-level cache also experiences hits and misses and incurs the associated miss penalty. This miss penalty includes the time required to retrieve a block from the main memory.
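The benefit of the second-level cache described above can be illustrated with the standard average memory access time (AMAT) formula. The following is a minimal sketch, not part of the invention; all latency and miss-rate figures are assumed for illustration only.

```python
# Illustrative sketch: average memory access time (AMAT) with and without a
# second-level cache. All latencies (in cycles) and miss rates are assumed.

def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time for one cache level."""
    return hit_time + miss_rate * miss_penalty

MAIN_MEMORY_TIME = 100.0   # cycles to retrieve a block from main memory (assumed)
L1_HIT_TIME = 1.0          # upper-level cache access time (assumed)
L1_MISS_RATE = 0.05        # fraction of accesses missing the upper level (assumed)
L2_HIT_TIME = 10.0         # second-level cache access time (assumed)
L2_MISS_RATE = 0.2         # fraction of second-level accesses that miss (assumed)

# Without a second-level cache, every upper-level miss pays the full
# main-memory latency.
without_l2 = amat(L1_HIT_TIME, L1_MISS_RATE, MAIN_MEMORY_TIME)

# With a second-level cache, the upper-level miss penalty is the second-level
# access time plus the main-memory time for accesses that also miss there.
l2_penalty = L2_HIT_TIME + L2_MISS_RATE * MAIN_MEMORY_TIME
with_l2 = amat(L1_HIT_TIME, L1_MISS_RATE, l2_penalty)

print(without_l2)  # 6.0 cycles
print(with_l2)     # 2.5 cycles
```

Under these assumed figures, the second-level cache cuts the effective access time from 6.0 to 2.5 cycles, which is the capture effect the passage above describes.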
The transfer of data from the main memory to the second-level cache is controlled by a memory controller that is associated with the second-level cache. This memory controller issues fetch requests to the main memory. Assigned to each fetch request is a job number that associates returned data from memory with a previously issued fetch request. A limited number of job numbers (e.g., 8) are typically available for assignment to received fetch requests.
In conventional systems, fetch requests are issued in order based upon the order in which they are received. This in-order processing of the fetch requests by the memory controller ensures that the sequence of reads and writes to memory is performed in the order defined by the CPU. In operation, the job numbers previously assigned to fetch requests that have been completed are reassigned to new fetch requests that are received by the memory controller. Reassignment of the job numbers is typically performed sequentially (e.g., . . . , 5, 6, 7, 0, 1, 2, . . . ).
A problem with this sequential reassignment occurs when the next job number in the sequence has not yet completed. For example, consider the case where job number 7 has just been assigned, and job numbers 0-6 remain outstanding. The next job number to be assigned is job number 0. Until job number 0 completes, a job number cannot be reassigned to the next fetch request that is received by the memory controller. The memory controller continues to delay reassignment even if another fetch request (e.g., the fetch request associated with job number 1) has already completed. This out-of-order completion scenario can significantly increase the cache miss penalty. Therefore, what is needed is a system and method for maximizing job number reuse to prevent unnecessary delays in the issuance of additional fetch requests.