The present invention relates, in general, to the field of computers and computer memory systems. More particularly, the present invention relates to a system and method for determining relative cache performance in a computer system utilizing a computer mass storage device such as a rotating, randomly accessible mass storage medium, comprising a "hard", "fixed", "rigid" or Winchester disk drive, as a data cache for a remote network data source or another relatively slower computer mass storage device such as a compact disk read-only memory ("CDROM").
As a general rule, central processors can process data significantly more quickly than that data can be moved into or out of primary storage, or the data "source". Consequently, the performance of a computer system is substantially limited by the speed of its primary data storage devices, networks and subsystems. In order to ameliorate this perceived "bottleneck", central processor interaction with the data source may be minimized by storing frequently referenced data in a relatively small, high-speed semiconductor memory cache located between the processor and the primary storage devices. However, such semiconductor memory, whether dynamic or static random access memory ("DRAM" or "SRAM"), is relatively costly per megabyte of storage compared to disk drives (on the order of eighty times more expensive) and, as such, the cache must generally be of comparatively limited capacity. Small caches function relatively well for repeated, small bursts of data but poorly for sustained loads.
When utilizing caching techniques, should a program issue a "read" command for an instruction or user data, the processor first looks in its cache. Should the requested data reside in the cache (a cache "hit"), there is no need to attempt a read from the data source, which may be located on a network, a CDROM or any other device or subsystem with a relatively slow access time. However, if the requested data is not available in the cache (a cache "miss"), the processor must then access the data source in order to retrieve the data sought.
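The hit/miss sequence described above can be sketched as follows. This is an illustrative sketch only; the class and function names (`Cache`, `read`, `fetch_from_source`) are hypothetical and do not appear in the text.

```python
# Minimal sketch of a read-through cache lookup: consult the cache
# first (a "hit"), and only on a "miss" go to the slower data source.

class Cache:
    def __init__(self):
        self.store = {}      # cached blocks, keyed by block address
        self.hits = 0
        self.misses = 0

    def read(self, address, fetch_from_source):
        """Return the data at `address`, consulting the cache first."""
        if address in self.store:            # cache "hit"
            self.hits += 1
            return self.store[address]
        self.misses += 1                     # cache "miss"
        data = fetch_from_source(address)    # slow path: network, CDROM, etc.
        self.store[address] = data           # copy retrieved data into cache
        return data
```

A second read of the same address is then served directly from the cache, with no request to the original source.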
Data which must be retrieved from the data source may then be written to the cache where it may later be accessed by the processor directly without a request directed to the original data source. Alternatively, any data which may be subsequently modified by the processor may also be written to the cache. In any event, inasmuch as a semiconductor memory or other cache may have relatively limited storage capacity, a data replacement algorithm of some sort is generally used to determine what existing data should be overwritten in the cache when additional data is read from primary storage.
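One common data replacement algorithm of the kind referred to above is least-recently-used ("LRU") replacement, sketched below. This is an illustrative sketch under that assumption only; the text does not specify any particular algorithm, and the names used here are hypothetical.

```python
# Minimal sketch of LRU replacement for a cache of fixed capacity:
# when the cache is full, the block least recently accessed is
# overwritten to make room for newly read data.

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()   # preserves access order

    def get(self, address):
        if address not in self.store:
            return None                       # miss: caller must fetch
        self.store.move_to_end(address)       # mark most recently used
        return self.store[address]

    def put(self, address, data):
        if address in self.store:
            self.store.move_to_end(address)
        self.store[address] = data
        if len(self.store) > self.capacity:   # evict least recently used
            self.store.popitem(last=False)
```

Other policies (first-in-first-out, random replacement) fit the same interface; only the eviction choice in `put` changes.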
In light of the foregoing, the current trend is toward ever larger semiconductor cache sizes or the utilization of local caching on an associated hard disk drive. Heretofore, however, the computer user has had no way of readily determining the performance advantages of his local data cache, particularly in a way that accurately factors in the additional "overhead" of initially writing data to the cache from the source network, CDROM or relatively slower access time computer mass storage device.
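The overhead question can be made concrete with a back-of-the-envelope model: each hit costs one cache access, while each miss costs a source access plus the write that copies the new data into the cache. This is an illustrative sketch only; the function name, parameters and the example timings are hypothetical assumptions, not measurements from any system described above.

```python
# Minimal sketch of an effective-access-time model for a disk-backed
# cache, charging misses both the source access and the cache-write
# overhead discussed in the text.

def effective_access_time(hit_rate, t_cache, t_source, t_cache_write):
    """Mean time per read: hits pay t_cache; misses pay the source
    access plus the overhead of writing the new data into the cache."""
    miss_rate = 1.0 - hit_rate
    return hit_rate * t_cache + miss_rate * (t_source + t_cache_write)

# Hypothetical figures: 80% hit rate, 10 ms disk-cache access,
# 200 ms CDROM access, 12 ms to write the fetched block to the cache.
t = effective_access_time(0.80, 10.0, 200.0, 12.0)
```

With these assumed figures the mean read time is about 50 ms, versus 200 ms going to the CDROM alone; at a low hit rate the same model shows the cache-write overhead making the cache a net loss, which is precisely the trade-off the user has had no ready way to measure.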