The above-mentioned applications Ser. Nos. 207,097 and 207,152 disclose a cache/disk data processing system having means for decreasing the average time required to access data where that data is stored on disks. A cache memory is provided, and data in this memory may be accessed in a much shorter time than if the data were accessed directly from the disks. The cache memory stores segments of data, these segments containing the most recently accessed words or the words most likely to be accessed within a short interval. When a host processor wishes to access a location or locations, it sends a command to a storage control unit which first checks to see if the data from the desired locations is resident in the cache memory. If it is, the data is returned to the host. If the data is not resident in the cache memory, then it is staged by segments from a disk, placed in the cache memory, and sent to the host. This may require replacement of some of the segments in the cache memory by the new data from the disks. As taught by applications Ser. Nos. 207,097 and 207,152, this replacement is accomplished by providing a segment descriptor table having an entry associated with each segment resident in the cache memory. Each entry in the segment descriptor table includes a forward and a backward age link address whereby the segments are linked from the least recently used to the most recently used. When a segment must be replaced, it is the least recently used segment which is replaced.
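The replacement mechanism described above may be sketched in simplified form as follows. This is a hypothetical illustration only, not the claimed implementation: an ordered dictionary stands in for the forward and backward age link addresses of the segment descriptor table, with entries ordered from the least recently used (front) to the most recently used (back).

```python
from collections import OrderedDict

class SegmentCache:
    """Simplified model of the cache memory and its segment descriptor
    table. Segment names and the stage_from_disk callback are illustrative
    assumptions, not terms from the applications."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.segments = OrderedDict()  # segment_id -> data, LRU first

    def read(self, segment_id, stage_from_disk):
        """Return segment data, staging it from disk on a cache miss."""
        if segment_id in self.segments:
            # Cache hit: relink the segment at the most recently
            # used position.
            self.segments.move_to_end(segment_id)
            return self.segments[segment_id]
        # Cache miss: if the cache is full, replace the least recently
        # used segment, then stage the new segment from the disk.
        if len(self.segments) >= self.capacity:
            self.segments.popitem(last=False)  # evict LRU segment
        data = stage_from_disk(segment_id)
        self.segments[segment_id] = data
        return data
```

For example, in a two-segment cache, touching a segment moves it away from the replacement position, so a subsequent miss evicts whichever segment has gone longest without a reference.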
In the system described above a segment is relinked at the most recently used position each time it is referenced. During what would otherwise be idle time for the storage control unit, segments in the cache memory which have been written to since they were first loaded from the disks are trickled back to the disks. Segments are trickled back to the disks only if they have been written to, and they are trickled in order from the least recently used to the most recently used.
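The trickle pass described above may be sketched as follows. This is a hedged illustration under simplifying assumptions: segments are represented as (identifier, written-to flag) pairs already ordered from least to most recently used, and `write_to_disk` is a hypothetical callback standing in for the actual disk write machinery.

```python
def trickle(segments, write_to_disk):
    """Trickle written-to segments back to the disks during idle time.

    segments: list of (segment_id, written_to) pairs, ordered from the
    least recently used to the most recently used. Only segments that
    have been written to since they were staged are copied back, and
    they are copied in LRU-to-MRU order.
    """
    for segment_id, written_to in segments:
        if written_to:
            write_to_disk(segment_id)
```

Note that a segment which has not been written to need never be trickled, since an identical copy already exists on the disk.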
In the foregoing system a write operation is acknowledged to the host when data from the host is written into the cache memory. Between the time a write is acknowledged to the host and the time the data is actually written to the disk (by replacement or trickling) there is a window of vulnerability to any cache failure, such as a power loss, which destroys the data. To shorten this window of vulnerability it is desirable to write all modified data to the disks as soon as possible. Unfortunately, this tends to increase traffic within the subsystem significantly, to the extent that a considerable part of the advantage of a cache memory may be lost. The least recently used cache aging technique, as employed in the aforementioned applications Ser. Nos. 207,097 and 207,152, minimizes the number of writebacks but still produces an arbitrarily long window of vulnerability for cache segments which are frequently referenced, because such segments do not become candidates for trickling; furthermore, once segments do become candidates for trickling, the trickle commands generated to control the trickling of the segments to the disks are assigned the lowest priority value. This problem was partially solved by the invention claimed in concurrently filed application Ser. No. 354,558, which separates the replacement age of a segment (i.e. its relative eligibility for replacement with a new segment) from its writeback age (i.e. its relative eligibility for being written back to disk). However, the mere separation of the replacement age of a segment from its writeback age did not completely solve the problem where the cache/disk system was extremely active. When the cache/disk system is extremely active there is insufficient "idle" time during which the written-to segments might be trickled back to the disks. Furthermore, a different problem exists in that higher disk write traffic occurs when segments are trickled back to the disks too soon.
The invention claimed in concurrently filed application Ser. No. 354,559 partially solves these problems by establishing a threshold writeback age such that no written-to segment becomes a candidate for trickling until the age since first write of the oldest written-to segment exceeds the threshold writeback age and, once that threshold is exceeded, by assigning the commands to trickle the segments an execution priority level that increases as the age since first write of the oldest written-to segment in the cache memory increases. However, that arrangement may allow the cache memory to fill up with written-to segments if there is a heavy concentration of write commands from the host processor. The present invention solves this problem by assigning an increasingly higher priority level to trickle commands as the percentage of the total number of segments in the cache memory that are written to becomes increasingly larger.
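The priority rule of the present invention may be sketched as follows. This is a hypothetical illustration: the number of priority levels and the percentage thresholds below are illustrative assumptions, not values taken from the specification; the point shown is only that the execution priority of trickle commands rises as written-to segments occupy a larger fraction of the cache memory.

```python
# Illustrative priority levels; the actual number of levels and the
# threshold percentages are assumptions for this sketch.
LOWEST_PRIORITY, HIGHEST_PRIORITY = 0, 3

def trickle_priority(written_to_count, total_segments):
    """Return the execution priority for trickle commands, increasing
    with the percentage of cache segments that have been written to."""
    fraction = written_to_count / total_segments
    if fraction < 0.25:
        return LOWEST_PRIORITY
    if fraction < 0.50:
        return 1
    if fraction < 0.75:
        return 2
    return HIGHEST_PRIORITY
```

Under this rule a lightly written cache leaves trickle commands at the lowest priority, preserving idle-time behavior, while a cache filling with written-to segments forces trickling ahead of other work before the cache is exhausted.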