1. Field of the Invention
This invention is related to the field of caches and, more particularly, to overriding hit/miss indications in a cache.
2. Description of the Related Art
Caches are typically used to reduce the average latency of accesses to a main memory system. The main memory system typically includes dynamic random access memory (DRAM) such as double data rate (DDR) synchronous DRAM (SDRAM). Caches typically have one or more orders of magnitude less capacity than the memory system, and also typically employ lower latency memory than the DRAM used in the memory system. Caches store copies of a subset of the data in memory, in units of cache blocks or cache lines. A cache block/line is the smallest unit of allocation/deallocation of storage space in the cache. Typically, caches store recently-accessed cache blocks.
Since a given storage location in the cache may store cache blocks from different memory locations in the memory system, the cache includes tags that identify the memory address of each cache block. When the cache is accessed, an input address is supplied for the memory location being accessed. The input address is compared to the tags of the cache block storage locations that are eligible to store the cache block identified by the address. Which cache block storage locations may store the cache block is dependent on the cache design, as discussed below. If a match between the input address and a tag is detected, the access is referred to as a hit and the access may be completed in the cache. If there is no match between the input address and a tag, the access is referred to as a miss and an access to the memory system is performed to complete the operation.
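The tag comparison described above can be sketched in software as follows. The structure and names here are illustrative assumptions for exposition only; an actual cache performs this comparison in hardware, and which locations are eligible depends on the cache design discussed below.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

/* One cache block storage location: a valid bit plus the tag that
   identifies the memory address of the block stored there.
   (Hypothetical structure for illustration.) */
typedef struct {
    bool     valid;
    uint32_t tag;
} cache_loc_t;

/* Compare the input address's tag against each eligible storage
   location. Returns true (hit) if any valid location's tag matches,
   false (miss) otherwise. */
bool tag_compare(const cache_loc_t *eligible, size_t n, uint32_t input_tag)
{
    for (size_t i = 0; i < n; i++) {
        if (eligible[i].valid && eligible[i].tag == input_tag)
            return true;
    }
    return false;
}
```

Note that an invalid location never produces a hit, even if its stored tag happens to match: on a miss, the block is fetched from the memory system.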
There are various cache designs that are often used. A direct-mapped design provides one cache block storage location that may be used for a given cache block, based on the address of that cache block. Typically, a portion of the address (referred to as the “index”) is used to select the cache block storage location. Thus, multiple addresses that have the same index map to the same cache block storage location. If multiple addresses that map to the same cache block storage location are accessed, the cache blocks corresponding to those addresses experience contention for that location. In a set associative design, multiple cache block storage locations (collectively referred to as a “set”) are eligible for a given address having a given index. Thus, contention among addresses having the same index may be eased by the ability to store more than one cache block in the set. In a fully associative design, any cache block storage location may be used for a cache block at any address.
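A direct-mapped design, and the contention it can produce, can be sketched as follows. The sizes (256 lines of 64-byte blocks) and names are hypothetical choices for illustration, not taken from the text above.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical direct-mapped cache: 256 lines of 64-byte blocks. */
#define NUM_LINES  256
#define BLOCK_SIZE 64

typedef struct {
    bool     valid;
    uint32_t tag;
} cache_line_t;

static cache_line_t lines[NUM_LINES];

/* The low-order block bits are the offset; the next bits are the
   "index" selecting the single eligible storage location; the
   remaining upper bits form the tag. */
static uint32_t index_of(uint32_t addr) { return (addr / BLOCK_SIZE) % NUM_LINES; }
static uint32_t tag_of(uint32_t addr)   { return addr / (BLOCK_SIZE * NUM_LINES); }

/* Hit if the indexed location holds a valid block with a matching tag. */
bool dm_lookup(uint32_t addr)
{
    cache_line_t *line = &lines[index_of(addr)];
    return line->valid && line->tag == tag_of(addr);
}

/* Fill the indexed location, displacing whatever block was there:
   two addresses with the same index contend for this one location. */
void dm_fill(uint32_t addr)
{
    cache_line_t *line = &lines[index_of(addr)];
    line->valid = true;
    line->tag   = tag_of(addr);
}
```

Addresses 0x1000 and 0x5000 differ by NUM_LINES * BLOCK_SIZE bytes, so they share an index and contend for the same line; a set associative design eases this by providing multiple ways per index.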
In normal operation, one or more tags are read from the cache and compared to the input address to detect a hit or miss. Additionally, the hit/miss result may be used to select which cache location outputs data for a read. There are some cases in which it is desirable to override the hit/miss detection via the tag comparison (e.g. for test purposes, to evict a cache block from the cache, etc.). Typically, such overrides are implemented by muxing the output of the tag comparison with the override hit/miss signals, and selecting the override or the tag comparison result by controlling the mux. However, the tag read, comparison, and output selection often form a critical timing path that may limit the clock frequency at which the cache (or an integrated circuit that includes the cache) may be operated. Inserting the muxes lengthens this critical path.
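The mux-based override described above can be modeled combinationally as follows. The signal names are illustrative assumptions; in hardware this would be a 2:1 mux per hit/miss signal rather than C code.

```c
#include <stdbool.h>

/* Model of the hit/miss override mux: one input is the normal tag
   comparison result, the other is the override hit/miss signal, and a
   control signal selects between them. The mux sits on the tag-compare
   path, so it adds delay to that (often critical) timing path. */
bool hit_with_override(bool tag_compare_hit,  /* result of tag comparison */
                       bool override_hit,     /* forced hit/miss value   */
                       bool override_enable)  /* mux select: use override */
{
    return override_enable ? override_hit : tag_compare_hit;
}
```

With the override disabled, the tag comparison result passes through unchanged; with it enabled, the forced value is used regardless of the tags (e.g. to force a miss for eviction, or a hit for test purposes).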