1. Field of Use
The present invention relates to data processing systems and more particularly to cache memory systems.
2. Prior Art
It is well known to provide hierarchical memory organizations in which a large, slow-speed main memory operates in conjunction with a small, high-speed buffer storage unit or cache. In such arrangements, the central processing unit (CPU) can access operand data and/or instructions at a rate which more closely approximates the operating rate of the machine. During normal operation, when the CPU provides the address of the information to be accessed, control circuits perform a search of a directory which stores associative addresses specifying which blocks of information reside in cache (i.e., which define a hit condition). When the search determines that the information resides in cache, the information is accessed and transferred to the CPU. When the requested information is not in cache, the control circuits request the information from main memory and, upon its receipt, write the information into cache, at which time it may be accessed.
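The directory search and fill-on-miss behavior described above can be sketched in software. The following C fragment is an illustrative model only, not the circuitry of any referenced system; the geometry (64 directory entries, 16-byte blocks) and the names `directory_search` and `directory_fill` are assumptions chosen for the example.

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed geometry for illustration only. */
#define NUM_SETS   64
#define BLOCK_SIZE 16

typedef struct {
    bool     valid;  /* entry describes a block resident in cache   */
    uint32_t tag;    /* associative address stored in the directory */
} DirEntry;

static DirEntry directory[NUM_SETS];

/* Directory search: derive the directory entry and associative
   address (tag) from the requested address, then compare against
   the stored associative address. True indicates a hit condition. */
bool directory_search(uint32_t addr)
{
    uint32_t set = (addr / BLOCK_SIZE) % NUM_SETS;
    uint32_t tag = addr / (BLOCK_SIZE * NUM_SETS);
    return directory[set].valid && directory[set].tag == tag;
}

/* On a miss, the block is fetched from main memory and its
   associative address recorded, so a later access will hit. */
void directory_fill(uint32_t addr)
{
    uint32_t set = (addr / BLOCK_SIZE) % NUM_SETS;
    directory[set].valid = true;
    directory[set].tag   = addr / (BLOCK_SIZE * NUM_SETS);
}
```

In this sketch a first reference to an address misses, `directory_fill` records the block, and a subsequent reference to the same block hits, mirroring the hit/miss flow described above.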
Examples of such systems are disclosed in the referenced patent applications of Charles P. Ryan and in U.S. Pat. No. 3,588,829.
It has been recognized that the limiting factor on the rate at which cache accesses take place is the time required to perform a directory search. In general, an entire cache cycle of operation is required to determine whether the requested information is in cache (i.e., to access the directory and compare associative addresses).
In the case of a hit condition, indicating that the information to be fetched or updated is in cache, a further access is required to complete the processor operation of either reading operand data from or writing data into the cache. Since cache memory data must be processed on a real-time basis and instruction accesses must be made from cache, the writing of memory data and instruction accesses normally interfere with such operations. To overcome such interference, prior art arrangements hold up processor operations until the memory data has been written into cache or the instructions have been accessed. This has been found to limit the overall access rate of the CPU, resulting in a decrease in CPU performance.
Accordingly, it is a primary object of the present invention to provide a cache arrangement which provides a central processing unit with rapid access to information.
It is a more specific object of the present invention to provide a cache arrangement which eliminates the interference between the different types of operations required to be performed.