This invention relates to a cache memory organization for use with a digital computer system. In particular, this invention relates to a cache memory organization having lock-up-free instruction fetch/prefetch features. Also, this invention relates to a cache memory organization that overlaps execution unit requests of operands for processor instructions requiring more than one operand. This invention is directed to the problem caused when a cache memory receives an address request for data which is not in the cache. If there is a miss request to a cache memory, there is the possibility of a time delay penalty on subsequent address references to the cache memory, even if the data for those subsequent references is present in the cache memory. The penalty is caused by the necessity of waiting until the missed data is received from central memory into the cache memory organization. Some prior art even requires the missed data to be received from central memory and updated into the cache before it is available for output.
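The lockout penalty described above can be illustrated with a minimal simulation. This is a hypothetical sketch, not taken from the specification: the function names, one-cycle hit time, and ten-cycle fill latency are all assumptions chosen only to contrast a blocking cache, which stalls every subsequent request until an outstanding miss is filled, with a lock-up-free cache, which continues to service hits while the fill proceeds.

```python
MISS_LATENCY = 10  # assumed main-memory fill time, in cycles (illustrative only)

def blocking_cache(requests, cached):
    """Prior-art behavior: every request after a miss waits for the fill."""
    cycle = 0
    finish = {}
    for addr in requests:
        cycle += 1
        if addr in cached:
            finish[addr] = cycle          # hit: completes in one cycle
        else:
            cycle += MISS_LATENCY         # miss: the whole cache stalls
            cached.add(addr)
            finish[addr] = cycle
    return finish

def lockup_free_cache(requests, cached):
    """Hits proceed while a miss is outstanding; the fill completes in background."""
    cycle = 0
    pending = {}   # addr -> cycle at which the in-flight fill arrives
    finish = {}
    for addr in requests:
        cycle += 1
        if addr in cached:
            finish[addr] = cycle                      # hit: no penalty
        elif addr in pending:
            finish[addr] = max(cycle, pending[addr])  # match on data in transit
        else:
            pending[addr] = cycle + MISS_LATENCY      # start fill, do not stall
            finish[addr] = pending[addr]
    return finish

# Request B misses, but A and C are resident: the lock-up-free cache
# services them without waiting for B's fill.
reqs, resident = ["A", "B", "C"], {"A", "C"}
print(blocking_cache(reqs, set(resident)))
print(lockup_free_cache(reqs, set(resident)))
```

In the blocking case, the resident request C is delayed behind B's fill; in the lock-up-free case it completes on its own cycle, which is the time delay penalty the invention is directed at removing.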
Known to applicant as prior art is U.S. Pat. No. 3,949,379 entitled, "Pipeline Data Processing Apparatus With High-speed Slave Store." This patent only touches on the idea of a cache memory, or as referred to in the patent a slave store, with respect to the lockout prevention feature after a missed address where data is not present in the slave store. Most of the lockout prevention in the subject patent is accomplished by providing three separate slave stores. The remaining lockout prevention feature shown in the patent is accomplished by allowing a block update for a previous missed address in parallel with subsequent accesses through the use of a write memory register. This latter lockout prevention, however, is only accomplished in the most trivial situations where, for example, only one block of data may be updated at a time, where there are no accesses to a block of data being updated, or where there can only be one write instruction to each word of the block on each total block update. This is described in columns 5 and 6 of the patent. In summary, the patent shows the possibility and benefit of lockout prevention for a slave store or cache memory but only proposes such use in the simplest of cases. Note that there is a high probability of accessing another word in the same block of data; this principle, called lookaside, is the basis of the utility of a cache memory. The present invention described herein extends lockout prevention to a point where a single cache memory store can effectively be used.
Another patent known to applicant is U.S. Pat. No. 3,896,419 entitled, "Cache Memory Store in a Processor of a Data Processing System." This patent shows the possibility of lockout prevention for a cache memory but only prevents lockout for storing operations, as described in column 2. The cache memory organization according to that invention also allows autonomous nonoverlapped block loadings, as described in column 2, but there is a lockout situation on a miss or no-match situation, as described in column 4 of that patent. The present invention described herein not only prevents lockout on up to a predetermined number of miss situations but also allows matching on data in transit from a backing store or main memory, with block loading of the cache buffer totally autonomous and completely overlapped. Note that in the invention described in the patent, only the loading of the cache buffer after the data has been received from main memory is overlapped.
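The distinction drawn above, tolerating up to a predetermined number of outstanding misses and matching later requests against data still in transit, can be sketched as follows. This is a hypothetical illustration only: the class name, the limit of outstanding misses, and the string return codes are assumptions introduced for this sketch, not terminology from the specification.

```python
from collections import OrderedDict

class MissRegisters:
    """Track in-flight block fills so later requests can match data in transit."""

    def __init__(self, limit=4):
        self.limit = limit               # assumed "predetermined number" of misses
        self.in_flight = OrderedDict()   # block address -> list of waiting requests

    def lookup(self, block, requester):
        """Classify a missed request against the outstanding fills.

        Returns 'hit-in-transit' if the block is already being fetched,
        'allocated' if a free register was taken to start a new fill, or
        'stall' if every register is busy (only then must the cache lock out).
        """
        if block in self.in_flight:
            self.in_flight[block].append(requester)   # overlap: no second fetch
            return "hit-in-transit"
        if len(self.in_flight) < self.limit:
            self.in_flight[block] = [requester]       # start an autonomous fill
            return "allocated"
        return "stall"

    def fill_complete(self, block):
        """Main memory returned the block: release waiters and free the register."""
        return self.in_flight.pop(block, [])

m = MissRegisters(limit=2)
print(m.lookup(0x100, "load-A"))   # allocated
print(m.lookup(0x100, "load-B"))   # hit-in-transit: matched data in transit
print(m.lookup(0x200, "load-C"))   # allocated
print(m.lookup(0x300, "load-D"))   # stall: registers exhausted
print(m.fill_complete(0x100))      # releases load-A and load-B
```

The point of the sketch is that a lockout occurs only when the bounded set of registers is exhausted, rather than on every miss as in the prior art discussed above.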
Another patent known to applicant in the prior art is U.S. Pat. No. 3,928,857 entitled, "Instruction Fetch Apparatus With Combined Look-ahead and Look-behind Capability." This patent also describes the problem of instruction unit lockout due to memory contention. The solution described is to provide a separate instruction buffer, moving the memory contention from between the operand fetch and instruction fetch portions of the memory cycle to between the operand fetch and instruction prefetch portions. If the instruction buffer shown in this patent were sufficiently large and had operand fetch features, this contention problem could be essentially eliminated. Another way to solve the problem shown in the patent, as described in the invention herein, is to provide a cache buffer memory for both operands and instructions with a mechanism to prevent lockout in a miss situation. Further, the invention of this patent does not solve the problem of cache memory lockout due to a miss on the first access of an instruction requiring many operand accesses; this is the situation where all memory accesses are from the execution unit. Also, in this proposed multiple store situation, there is a significant problem when the data in more than one of the stores may be the same piece of data, and updating becomes extremely cumbersome. A single cache memory buffer organization simplifies this situation.
There are many other patents relating to cache memories such as U.S. Pat. Nos. 4,095,269; 4,195,341; 3,292,153; 4,056,845; and 4,047,157 but none of these patents is at all related to the prevention of the cache memory lockout situation on a miss.