The present application relates generally to an improved data processing apparatus and method and more specifically to an apparatus and method for providing an improved reconfigurable cache.
A cache is used to speed up data transfer and may be either temporary or permanent. Memory caches are present in every computer to speed up instruction execution and data retrieval and updating. These temporary caches serve as staging areas, and their contents are constantly changing. A memory cache, or “CPU cache,” is a memory bank that bridges main memory and the central processing unit (CPU). A memory cache is faster than main memory and allows instructions to be executed and data to be read and written at higher speed. Instructions and data are transferred from main memory to the cache in fixed-size blocks, known as cache “lines.”
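The mapping of a memory address onto a cache line can be sketched as follows. This is a minimal illustration, not part of the application; the line size and set count are assumed values chosen only for the example.

```python
# Illustrative sketch: how a CPU cache decomposes a memory address to
# locate a cache line. The parameters below are assumptions for
# illustration, not values taken from the application.

LINE_SIZE = 64   # bytes per cache line (assumed)
NUM_SETS = 256   # number of sets in the cache (assumed)

def split_address(addr: int):
    """Return (tag, set_index, byte_offset) for a physical address."""
    offset = addr % LINE_SIZE                 # byte within the cache line
    index = (addr // LINE_SIZE) % NUM_SETS    # which set the line maps to
    tag = addr // (LINE_SIZE * NUM_SETS)      # identifies the line in its set
    return tag, index, offset

tag, index, offset = split_address(0x12345)
```

Because whole lines are transferred between main memory and the cache, a single miss brings in all `LINE_SIZE` bytes surrounding the requested address.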
Non-uniform cache architecture (NUCA) is an emerging cache architecture for large cache design. In a NUCA, a single cache contains multiple banks at differing distances from the processor and, thus, with differing wire delays and access latencies. NUCA improves the performance of memory systems.
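The distance-dependent latency of a NUCA can be sketched with a simple model. This is an assumption-laden illustration only; the cycle counts and hop delays below are invented for the example and do not come from the application.

```python
# Minimal sketch of the NUCA idea: one logical cache is divided into
# banks at different distances from the processor, so access latency
# varies per bank. All numbers here are assumed for illustration.

BASE_LATENCY = 3   # cycles to access any bank's storage array (assumed)
HOP_DELAY = 2      # extra cycles of wire delay per network hop (assumed)

def bank_latency(hops: int) -> int:
    """Access latency of a bank located 'hops' hops from the processor."""
    return BASE_LATENCY + HOP_DELAY * hops

# A four-bank NUCA: near banks respond faster than far banks.
latencies = [bank_latency(h) for h in range(4)]
```

A uniform cache would instead pay the worst-case latency on every access; NUCA lets accesses to near banks complete sooner.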
A large NUCA has many banks. Therefore, frequent data movement between banks may hurt performance and increase power consumption. Conventional reconfigurable caches mainly target power reduction by using a simpler (direct-mapped) or partial cache. For example, an adaptive cache may configure between a direct-mapped cache and a set-associative cache, or may configure between two-way and four-way set associativity. Another form of adaptive cache uses one-way access, turning on only one way during a write, to save power. While conventional reconfigurable and adaptive cache architectures and techniques generally target power reduction, using conventional reconfigurable caches does not improve access latency.
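The reconfiguration between direct-mapped and set-associative operation described above can be sketched as a change in how the same storage array is indexed. This is a hedged illustration under assumed parameters, not the mechanism of any particular adaptive cache.

```python
# Hedged sketch: viewing one fixed pool of cache lines either as a
# direct-mapped (1-way) cache or as a 4-way set-associative cache.
# The array size is assumed for illustration only.

NUM_LINES = 16   # total cache lines in the storage array (assumed)

def set_of(line_addr: int, ways: int) -> int:
    """Map a line address to a set index under the chosen associativity.

    ways=1 gives direct-mapped behavior (16 sets of one line each);
    ways=4 gives 4-way behavior (4 sets of four lines each).
    """
    num_sets = NUM_LINES // ways
    return line_addr % num_sets

# The same line address lands in different sets in each configuration.
dm_set = set_of(13, ways=1)   # direct-mapped configuration
sa_set = set_of(13, ways=4)   # 4-way set-associative configuration
```

Such reconfiguration reduces power (fewer ways probed, or only one way enabled on a write) but, as the passage notes, it does not by itself shorten the wire delay to a distant bank, so access latency is unimproved.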