This invention generally relates to cache memory, and in particular, to concurrent refresh in cache memory.
Embedded dynamic random access memory (EDRAM) requires periodic refresh operations to retain the contents of its memory cells. In order to reduce the adverse performance effects of this requirement, EDRAM may include a concurrent refresh feature. An EDRAM instance, or macro, may initiate an internal refresh operation during a functional fetch or store operation. The functional access is performed on one portion of the EDRAM macro, while the refresh operation is simultaneously performed on another portion of the EDRAM macro. The EDRAM macro may track the progress of the internally generated refresh activities and determine whether the internal refresh operations are sufficient to meet the refresh needs of the macro. If the rate of concurrent refresh is not sufficient for a given time period, the EDRAM macro may signal that a directed refresh command is required.
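The tracking behavior described above can be illustrated with a minimal sketch. All names and parameters here (the `EdramMacro` class, `num_rows`, `refreshes_required_per_interval`) are hypothetical and are not part of any particular EDRAM design; the sketch assumes only that a refresh to one row may proceed concurrently with a functional access to a different row, and that a per-interval deficit triggers directed refresh requests.

```python
class EdramMacro:
    """Hypothetical model of an EDRAM macro that opportunistically
    refreshes one portion while another portion services a
    functional fetch or store (names are illustrative only)."""

    def __init__(self, num_rows, refreshes_required_per_interval):
        self.num_rows = num_rows
        self.required = refreshes_required_per_interval
        self.completed = 0   # concurrent refreshes done this interval
        self.next_row = 0    # next row awaiting refresh

    def functional_access(self, row):
        # During the access, refresh a *different* portion of the
        # macro; if the pending refresh row collides with the
        # accessed row, the concurrent refresh is skipped this time.
        if self.next_row != row:
            self.completed += 1
            self.next_row = (self.next_row + 1) % self.num_rows

    def end_of_interval(self):
        # Report how many directed refresh commands are still
        # required, i.e., the shortfall of concurrent refreshes.
        deficit = max(0, self.required - self.completed)
        self.completed = 0
        return deficit


macro = EdramMacro(num_rows=4, refreshes_required_per_interval=3)
macro.functional_access(1)   # refreshes row 0 concurrently
macro.functional_access(0)   # refreshes row 1 concurrently
deficit = macro.end_of_interval()
```

In this trace, two concurrent refreshes were achieved against a requirement of three, so the macro would signal that one directed refresh command is required.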
In a relatively large cache system, there may be considerable distance and latency separating a cache controller from the cache itself. The latencies involved render typical concurrent refresh signaling designs unusable. For example, a directed refresh request targeting the furthest address-sliced portion of a large cache is an urgent request, requiring a break in pipeline access. The multiple-cycle latency to and from the furthest banks adds directly to the duration of the break in the access pipeline. Additionally, it is possible that the EDRAM macro may require more than one directed refresh command for a given time interval, requiring the cache controller to always create a break in the pipeline sufficient for the maximum number of refresh commands. Further, the cache controller must keep the access pipeline idle while verifying that sufficient refresh commands have been issued. It follows that the larger the cache, the longer the latency, which results in long idle periods for the processing pipeline.
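The worst-case idle period described above can be sketched with rough arithmetic. This is an illustrative model only, with hypothetical parameter names; it assumes the controller must idle the pipeline for the round-trip latency to the furthest bank plus enough cycles for the maximum number of directed refresh commands.

```python
def pipeline_break_cycles(one_way_latency, max_directed_refreshes,
                          cycles_per_refresh=1):
    """Rough model (hypothetical parameters) of the worst-case
    pipeline break: round-trip signaling latency to the furthest
    bank, plus idle cycles reserved for the maximum number of
    directed refresh commands the macro might require."""
    round_trip = 2 * one_way_latency
    refresh_window = max_directed_refreshes * cycles_per_refresh
    return round_trip + refresh_window


# A bank 5 cycles away, up to 3 directed refreshes per interval:
idle = pipeline_break_cycles(one_way_latency=5, max_directed_refreshes=3)
```

The example reflects the point in the text: the round-trip term grows with cache size, so larger caches force longer idle periods even when few or no directed refreshes turn out to be needed.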