In computing, a cache is a memory component that stores recently used data so that future requests for the same data can be served faster. Typically, the data stored in a cache is duplicated elsewhere in the system and is the result of an earlier computation or retrieval. A cache "hit" occurs when requested data is found in the cache, while a cache "miss" occurs when the data is sought in the cache but is not present. When a cache hit occurs, the data can be retrieved from the cache more quickly than from other data stores, such as a disk drive. Thus, it is well understood that caches can speed up system accesses to data.
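The hit/miss behavior described above can be sketched minimally as follows, using a Python dictionary as the cache and a hypothetical slow_fetch() function standing in for a slower data store such as a disk drive (both names are illustrative assumptions, not part of the original):

```python
def slow_fetch(key):
    # Placeholder for an expensive retrieval (e.g., a disk or network read).
    return key * 2

cache = {}

def lookup(key):
    if key in cache:            # cache hit: data served from the fast cache
        return cache[key], "hit"
    value = slow_fetch(key)     # cache miss: fall back to the slow store
    cache[key] = value          # duplicate the result in the cache
    return value, "miss"

print(lookup(7))  # first access misses: (14, 'miss')
print(lookup(7))  # repeated access hits: (14, 'hit')
```

The second lookup is served entirely from the cache, which is the source of the speedup the text describes.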
Overflow data is typically a separate issue from cache data and generally arises from hash collisions. Data can be stored in memory at addresses determined by hash keys. However, because practical hashing algorithms are not perfect, multiple input data sets can generate the same hash key, so collisions can occur between selected memory locations. To accommodate collisions, a bucket of multiple data entries can be placed at each hash key address. These buckets have a limited size, however, and may fill completely. Thus, a hash key can sometimes point to a memory area that does not have capacity to store new data. When this happens, the data is stored in a secondary memory area as overflow data.
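The bucket-and-overflow arrangement can be sketched as follows. The bucket size, bucket count, and the use of a simple list as the secondary overflow area are illustrative assumptions for this sketch:

```python
BUCKET_SIZE = 2   # limited capacity per bucket (illustrative)
NUM_BUCKETS = 4   # number of hash key addresses (illustrative)

buckets = [[] for _ in range(NUM_BUCKETS)]
overflow = []     # secondary memory area for entries whose bucket is full

def insert(key, value):
    bucket = buckets[hash(key) % NUM_BUCKETS]
    if len(bucket) < BUCKET_SIZE:
        bucket.append((key, value))
        return "bucket"
    overflow.append((key, value))   # bucket full: spill to the overflow area
    return "overflow"

def lookup(key):
    # Check the primary bucket first, then fall back to the overflow area.
    for k, v in buckets[hash(key) % NUM_BUCKETS]:
        if k == key:
            return v
    for k, v in overflow:
        if k == key:
            return v
    return None
```

With the small sizes above, a third colliding key spills into the overflow list, mirroring the condition where a hash key points to a memory area with no remaining capacity.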
Cache data and overflow data are conventionally stored in separate memories or in a single partitioned memory. Overflow conditions can be rare, however, so the memory area allocated to overflows can go unused, resulting in inefficient use of memory space. Additionally, the space allocated to cache data is generally static and fixed. Even when additional cache space would improve efficiency, the system merely uses whatever space is available for the cache and evicts old entries to make room for new ones.
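The fixed-capacity eviction behavior mentioned above can be sketched with a simple least-recently-used policy. The class name, capacity value, and choice of LRU (rather than some other replacement policy) are assumptions for illustration only:

```python
from collections import OrderedDict

class FixedCache:
    """A cache whose space is static and fixed; old entries are evicted
    to make room for new ones rather than the cache growing."""

    def __init__(self, capacity):
        self.capacity = capacity       # fixed allocation (illustrative)
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)  # mark as recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        elif len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict the oldest entry
        self.entries[key] = value
```

Once the fixed capacity is reached, every insertion displaces an existing entry, even if retaining more entries would be more efficient for the workload.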