The continuing evolution of integrated-circuit technology has resulted in the production of digital data processors with increasingly fast cycle times. A modern-day processor is thus capable of performing 10 to 100 times more operations per second than was achievable in the not-too-distant past. To further enhance processing capability, it has also become commonplace to include multiple central processors within a computer system, along with high-speed peripheral and specialized front-end processors, to enable even higher data throughput.
Even though there have been many concomitant developments in the art of data storage, the data transfer rates of memory systems have often been unable to keep pace with the enhanced processing capabilities of the system processors. As a result, the entire system often becomes memory bound and inefficient because data is supplied to the processors at an insufficient rate.
Various memory structures have been adopted to resolve this problem. One of these structures is a cache memory comprising a limited-size memory having a cycle time much faster than the main or system memory of the processing system. The data stored in the cache memory comprises a constantly changing subset of the information stored in the system memory. The time penalties resulting from the continual access of the slower system memory and the relatively long data transfer path from the system memory to the processors can be avoided if the data being requested by the system processors is already stored in the much faster cache memory. It is, however, necessary to select the subset of the data to be stored in the cache memory carefully in order to minimize the number of system memory accesses that need to be performed. This can be accomplished by storing in the cache memory the information most recently requested by the system processors.
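The policy described above, keeping the most recently requested words in the small fast memory and evicting the least recently used ones when it fills, can be sketched as follows. This is a minimal illustrative model, not the patented mechanism; all names (`Cache`, `read`, `lines`) are assumptions for the sketch.

```python
from collections import OrderedDict

class Cache:
    """Sketch of a limited-size cache holding a constantly changing
    subset of system memory: the most recently requested words.
    Illustrative only; names and interface are assumptions."""
    def __init__(self, system_memory, capacity):
        self.memory = system_memory       # models the slow system memory
        self.capacity = capacity          # limited size of the cache
        self.lines = OrderedDict()        # address -> word, in LRU order
        self.hits = self.misses = 0

    def read(self, addr):
        if addr in self.lines:            # hit: no slow system-memory access
            self.hits += 1
            self.lines.move_to_end(addr)  # mark as most recently requested
        else:                             # miss: fetch from system memory
            self.misses += 1
            self.lines[addr] = self.memory[addr]
            if len(self.lines) > self.capacity:
                self.lines.popitem(last=False)  # evict least recently used
        return self.lines[addr]
```

Keeping the recently requested subset minimizes system-memory accesses because programs tend to re-reference the same words; repeated reads are served from the fast cache.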
Even with the use of the prior art cache memories, the data processing systems often remain memory bound and underutilized. As a further means to correct this problem, data processing systems have been proposed wherein two words of data are simultaneously transferred from the system memory to the system processors or other system elements in response to a single memory request. This serves as a means of reducing the number of memory requests that must be issued. Such a double word transfer processing system is described in each of the following applications assigned to the assignee of the present invention:
LOCAL BUS INTERFACE FOR CONTROLLING INFORMATION TRANSFERS BETWEEN UNITS IN A CENTRAL SUBSYSTEM, Arthur Peters et al., Serial No. 140,662, filed 4/15/80.

SELF-EVALUATION SYSTEM FOR DETERMINING THE OPERATIONAL INTEGRITY OF A DATA PROCESSING SYSTEM, Richard P. Brown et al., Serial No. 140,621, filed 4/15/80, now U.S. Pat. No. 4,322,846.

BUFFER SYSTEM FOR SUPPLYING PROCEDURE WORDS TO A CENTRAL PROCESSOR SYSTEM, William E. Woods et al., Serial No. 140,630, filed 4/15/80.

STACK MECHANISM WITH THE ABILITY TO DYNAMICALLY ALTER THE SIZE OF A STACK IN A DATA PROCESSING SYSTEM, Phillip E. Stanley et al., Serial No. 140,624, filed 4/15/80.

INTERFACE FOR CONTROLLING INFORMATION TRANSFERS BETWEEN MAIN DATA PROCESSING SYSTEM UNITS AND A CENTRAL SUBSYSTEM, George J. Barlow et al., Serial No. 140,623, filed 4/15/80.
The use of double word transfers between the system processors and the system memory has resulted in an incompatibility with the cache memories heretofore in existence. If a processor requested two words from the system memory and both words were also present in the cache memory, the cache memory was not able to respond in the most efficient manner, i.e., by simultaneously transferring both of the requested data words.
In the prior art cache memories, each double word memory request resulted in two cache memory reads or writes. This caused an unnecessary duplication of memory cycles and lessened the efficiency of the cache memory.
Thus, there has been a need for a cache memory compatible with double-wide data transfers and capable of reading, writing, and transferring two data words in response to a single memory request.
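The difference from the prior-art behavior described above can be sketched by organizing the cache in two-word lines, so that a single lookup satisfies a double word request instead of requiring two separate cache cycles. This is a hypothetical illustration under assumed names (`DoubleWordCache`, `read_pair`), not the disclosed design.

```python
class DoubleWordCache:
    """Sketch of a cache organized in two-word (even/odd) lines: one
    lookup returns both words of a double word request, avoiding the
    duplicated cache cycles of a single-word design. Illustrative only."""
    def __init__(self, system_memory):
        self.memory = system_memory   # models the slow system memory
        self.lines = {}               # even base address -> (even, odd) pair

    def read_pair(self, addr):
        base = addr & ~1              # align to the even/odd pair boundary
        if base not in self.lines:    # one miss fetches both words at once
            self.lines[base] = (self.memory[base], self.memory[base + 1])
        return self.lines[base]       # both words delivered in one access
```

A single-word cache would need two reads (and two tag lookups) to answer the same request; storing aligned pairs lets one cache cycle serve the whole double word transfer.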