1. Field of the Invention
The present invention is directed generally to a system and method for optimizing the utilization of computer hardware and more specifically to a system and method for optimizing the utilization of a cache memory.
2. Description of the Background of the Invention
Large mainframe computer systems generally include, along with various other pieces of equipment, a central processing unit, a number of direct access storage devices and one or more input/output controllers. Each input/output controller processes data requests from the central processing unit for those direct access storage devices which are in communication with that controller. These input/output controllers may include a cache memory, which is typically smaller than 256 megabytes.
When an input/output controller which includes a cache memory processes an initial data request from the central processing unit for a certain track of data, the requested track of data is loaded from one of the direct access storage devices into the cache memory. Subsequent requests for that track of data, while it is stored in the cache memory, do not require the input/output controller to access the track of data on the direct access storage device. Because the central processing unit's access time for data residing in the cache memory is much less than the access time for data residing on one of the direct access storage devices, the performance of the computer system is greatly improved when data requested by the central processing unit is located in the cache memory.
Because the size of the cache memory is limited, however, not all of the tracks of data requested by the central processing unit can reside in the cache memory. Thus, the input/output controller generally utilizes a least-recently-used algorithm wherein data is loaded into the cache memory until the cache memory is full, and thereafter the track of data which has been least recently accessed by the central processing unit is replaced by a newly requested track of data that is not found in the cache memory.
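The least-recently-used replacement policy described above can be sketched as follows. This is a minimal illustrative model, not the controller's actual implementation; the class name, the capacity parameter, and the stand-in for loading a track from a direct access storage device are all assumptions introduced here for clarity.

```python
from collections import OrderedDict


class LRUTrackCache:
    """Illustrative least-recently-used cache of data tracks.

    Tracks are kept in an OrderedDict from least to most recently
    accessed; the track_id keys and _load_from_dasd stand-in are
    hypothetical, not taken from the original text.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.tracks = OrderedDict()  # track_id -> track data, oldest first

    def request(self, track_id):
        if track_id in self.tracks:
            # Cache hit: the controller need not access the storage
            # device; mark this track as most recently accessed.
            self.tracks.move_to_end(track_id)
            return self.tracks[track_id]
        # Cache miss: load the track from the direct access storage device.
        data = self._load_from_dasd(track_id)
        if len(self.tracks) >= self.capacity:
            # Cache full: replace the least recently accessed track.
            self.tracks.popitem(last=False)
        self.tracks[track_id] = data
        return data

    def _load_from_dasd(self, track_id):
        # Placeholder for reading the track from a storage device.
        return f"data-for-track-{track_id}"
```

For example, with a capacity of two tracks, requesting tracks 1, 2, then 1 again, and then track 3 causes track 2 (the least recently accessed) to be replaced.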
This algorithm does not, however, result in the most efficient use of the cache memory. For example, a track of data whose interval between successive accesses by the central processing unit exceeds the average time that a track of data resides in the cache memory, from the last time it was accessed by the central processing unit until it is replaced by another track of data, is not an efficient user of the cache memory. This is because a certain amount of time is required to load a track of data from the direct access storage device into the cache memory. Loading a track of data into the cache memory is efficient only if the track will still be in the cache memory when the central processing unit next requests it. Thus, cache memory efficiency could be increased if tracks of data which are not efficient cache users were inhibited from being loaded into the cache memory, thereby preventing them from displacing potentially more efficient users of the cache memory. Unfortunately, however, there is no direct method for calculating the length of time a track of data resides in the cache memory after its last access, thereby making it difficult to determine which tracks of data are not efficient cache memory users. Thus, the need exists for a system for optimizing the utilization of the cache memory by determining which tracks of data are not efficient cache memory users and inhibiting those tracks from being loaded into the cache memory.
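The admission criterion discussed above can be sketched as follows. Since, as noted, there is no direct method for calculating how long a track resides in the cache memory after its last access, this sketch simply assumes an externally supplied estimate of average residency time; the class name, the estimate, and the admission rule are illustrative assumptions, not the invention's actual method.

```python
class CacheAdmissionFilter:
    """Illustrative filter that inhibits caching of inefficient tracks.

    A track is admitted only if its observed interval between
    successive accesses does not exceed an (assumed, externally
    estimated) average time a track survives in the cache memory.
    """

    def __init__(self, average_residency_seconds):
        # Hypothetical estimate of the average time a track resides
        # in the cache from its last access until it is replaced.
        self.average_residency = average_residency_seconds
        self.last_access = {}  # track_id -> time of previous request

    def should_cache(self, track_id, now):
        previous = self.last_access.get(track_id)
        self.last_access[track_id] = now
        if previous is None:
            return True  # no access history yet; admit by default
        # Inhibit caching when the track's inter-access interval
        # exceeds the average residency time: such a track would
        # likely be replaced before it is requested again.
        return (now - previous) <= self.average_residency
```

For instance, with an assumed average residency of 10 seconds, a track re-requested 5 seconds after its previous access would be admitted, while one re-requested 95 seconds later would be inhibited.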