1. Field of the Invention
The present invention relates to a method and apparatus for preventing "livelock" between two masters that read information from a memory device containing cache.
2. Description of Related Art
Computers typically use dynamic random access memory (DRAM) devices to provide memory locations for the central processing unit (CPU). Increases in CPU speed and capability have driven a corresponding expansion in DRAM design. Unfortunately, increasing the size of a DRAM chip decreases the speed of memory access. Recently, DRAM devices have incorporated a cache line that provides a readily accessible block of data. Such a device is presently produced by Rambus, Inc. of Mountain View, Calif. The Rambus design incorporates a pair of caches that each store a row of data from a corresponding main memory block.
The CPU normally sends a read request that includes the addresses to be read. The DRAMs contain a control circuit that determines whether the cache has the requested data. If the cache has the requested data, the DRAM provides an acknowledge (ACK) signal to the CPU and the data is provided to the processor. If the cache does not have the data, the DRAM sends a no acknowledge (NACK) signal to the CPU and loads the requested data from main memory into cache. The processor then resubmits a read request containing the same addresses. The cache will now have the requested data and will transfer the same to the CPU.
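The ACK/NACK exchange above can be sketched as a small simulation. This is a hypothetical model for illustration only; the class and method names are invented and do not reflect the actual Rambus interface.

```python
ACK, NACK = "ACK", "NACK"

class CachedDram:
    """Hypothetical model of a DRAM with a single cached row."""

    def __init__(self, main_memory):
        self.main_memory = main_memory   # address -> data
        self.cache = {}                  # currently cached row

    def read(self, address):
        """Return (ACK, data) on a cache hit; on a miss, NACK the
        request and load the requested row into cache for the retry."""
        if address in self.cache:
            return ACK, self.cache[address]
        # Miss: fetch the requested data from main memory into cache,
        # replacing whatever row was cached before.
        self.cache = {address: self.main_memory[address]}
        return NACK, None

dram = CachedDram({0x10: "A", 0x20: "B"})
print(dram.read(0x10))   # first request misses -> ('NACK', None)
print(dram.read(0x10))   # resubmitted request hits -> ('ACK', 'A')
```

With a single master, the NACK-then-retry sequence always converges: the retry finds its data in cache.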
Attaching two masters (CPUs) to a DRAM with cache presents particular problems. For example, when the first CPU requests addresses that are not in the DRAM cache, the DRAM will NACK the request (generate a NACK signal) and then fetch the requested data from the main memory block and load the same into cache. The requested data is now in cache waiting for the first CPU to resubmit the same address request. If the second CPU provides a read request to the DRAM before the first CPU resubmits the first request, the DRAM will look to see if the cache contains the data requested by the second CPU. If the requested addresses are not within cache, the DRAM will generate a NACK signal and then proceed to fetch the new data from main memory and load the same into cache, replacing the data requested by the first CPU with the data requested by the second CPU. Now, when the first CPU resubmits the same read request, the cache will not contain the requested data; it contains the data requested by the second CPU. The DRAM will generate a NACK signal and fetch the requested data from main memory into cache, again replacing the data requested by the second CPU with the data requested by the first CPU. When the second CPU resubmits its read request, the process is repeated. The DRAM and CPUs are thus caught in an endless loop of sending read requests and fetching data into cache. Such a loop is commonly referred to as "livelock".
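The two-master livelock can be demonstrated with the same kind of simulation. This is an illustrative sketch with invented names: each master's retry evicts the row the other master is waiting for, so no request is ever acknowledged.

```python
class CachedDram:
    """Hypothetical single-row cached DRAM (illustration only)."""

    def __init__(self, main_memory):
        self.main_memory = main_memory
        self.cache = {}

    def read(self, address):
        if address in self.cache:
            return "ACK", self.cache[address]
        # Miss: the fetch evicts whatever row was previously cached.
        self.cache = {address: self.main_memory[address]}
        return "NACK", None

dram = CachedDram({0x10: "A", 0x20: "B"})
acks = 0
# The two CPUs alternate: each resubmits its request right after the
# other CPU's miss has replaced the cached row.
for _ in range(10):
    for addr in (0x10, 0x20):        # CPU 1, then CPU 2
        status, _ = dram.read(addr)
        if status == "ACK":
            acks += 1
print(acks)   # 0 -- every request is NACKed; the masters are livelocked
```

Because the cache holds only one row at a time, the strict alternation of the two masters guarantees that every retry arrives just after its data has been evicted.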
A similar problem exists if a CPU request is submitted during a refresh cycle of a DRAM with cache. When the DRAM is in a refresh cycle, the refresh controller stores the contents of the memory cells that are to be refreshed into cache. After a row of data is refreshed, the contents of the cache are reloaded back into main memory. If the CPU provides a request during the refresh cycle, the DRAM will cause the data requested by the CPU to be transferred into cache, replacing the refresh data already within the cache. When the refresh controller places the data from cache back into main memory, the refreshed memory cells will contain invalid data.
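The refresh hazard can be traced step by step in a short sketch. The row names and the single-row cache model are hypothetical; the point is only the ordering of events.

```python
main_memory = {"row0": "refresh-data", "row1": "cpu-data"}

# Step 1: the refresh controller copies the row being refreshed
# (row0) into cache.
cache = {"row0": main_memory["row0"]}

# Step 2: a CPU read request arrives mid-refresh and misses; the
# fetch loads the CPU's row (row1), evicting the refresh data.
cache = {"row1": main_memory["row1"]}

# Step 3: the refresh controller writes the cache contents back to
# the row it believes it is refreshing. Row0 now holds row1's data.
main_memory["row0"] = next(iter(cache.values()))
print(main_memory["row0"])   # 'cpu-data' -- row0 has been corrupted
```

The corruption occurs because the refresh controller has no way of knowing that an intervening CPU miss replaced its data in cache.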
One possible solution is the incorporation of a timer that is connected to the cache and main memory block. Once a cache fetch from main memory is initiated, the timer prevents further cache fetches until a predetermined time has elapsed. Thus in the example above, when the first CPU provides a read request that is subsequently fetched from main memory into cache, the submission of a read request from the second CPU will not cause a cache fetch of the newly requested data unless the predetermined time has elapsed since the first request. The timer is typically set to allow the first master to resubmit its read request before the time has expired. The use of a timer is somewhat limiting, in that the speed of the CPU must be known and must fall within the limits of the timer. The timer approach also does not readily accommodate modifications to the CPU or the bus protocol.
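The timer workaround might be sketched as follows. The class name, the tick-based clock, and the lockout constant are all assumptions made for illustration; real hardware would implement the timer in the DRAM control circuit.

```python
LOCKOUT = 3   # assumed lockout period, in arbitrary "ticks"

class TimedCachedDram:
    """Hypothetical cached DRAM with a fetch-lockout timer."""

    def __init__(self, main_memory):
        self.main_memory = main_memory
        self.cache = {}
        self.fetch_time = None   # tick at which the last fetch began

    def read(self, address, now):
        if address in self.cache:
            return "ACK", self.cache[address]
        # Refuse to start a new fetch while the timer is running, so
        # the first master's data survives until it can retry.
        if self.fetch_time is not None and now - self.fetch_time < LOCKOUT:
            return "NACK", None
        self.cache = {address: self.main_memory[address]}
        self.fetch_time = now
        return "NACK", None

dram = TimedCachedDram({0x10: "A", 0x20: "B"})
print(dram.read(0x10, now=0))   # CPU 1 misses; fetch starts the timer
print(dram.read(0x20, now=1))   # CPU 2 NACKed without evicting CPU 1's row
print(dram.read(0x10, now=2))   # CPU 1's retry hits -> ('ACK', 'A')
```

The scheme works only if the first master's retry is guaranteed to arrive within the lockout window, which is exactly the dependence on CPU speed that the passage identifies as the limitation.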