Eventually-consistent data stores are common components of distributed systems. As is known, an “eventually-consistent data store” is one that allows certain representations of data to fall behind the most recent updates. For example, a distributed data store may maintain multiple versions of a particular data element in different locations. Rather than requiring all versions of that data element to be identical at all times, eventual consistency allows certain versions to lag behind. For instance, it may be perfectly acceptable for different versions of a data element to be out of sync with one another for several minutes. In a busy, distributed system, eventual consistency helps to avoid chatter among nodes and high latency in response times, as the nodes merely require their data to be consistent eventually, rather than immediately.
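The lagging-versions behavior described above can be illustrated with a minimal sketch. The `Replica` class and `sync` function below are hypothetical names introduced here for illustration; the sketch assumes a simple last-writer-wins merge, which is only one of several reconciliation strategies an eventually-consistent store might use.

```python
import time


class Replica:
    """One copy of the data; its contents may lag behind its peers."""

    def __init__(self):
        self.store = {}  # key -> (timestamp, value)

    def write(self, key, value):
        self.store[key] = (time.time(), value)

    def read(self, key):
        entry = self.store.get(key)
        return entry[1] if entry else None


def sync(replicas):
    """Converge all replicas on the newest version of each key
    (last-writer-wins), modeling eventual consistency."""
    merged = {}
    for replica in replicas:
        for key, (ts, value) in replica.store.items():
            if key not in merged or ts > merged[key][0]:
                merged[key] = (ts, value)
    for replica in replicas:
        replica.store = dict(merged)


a, b = Replica(), Replica()
a.write("x", 1)              # the update lands on replica a only
assert b.read("x") is None   # replica b lags behind for a while
sync([a, b])                 # ... but converges eventually
assert b.read("x") == 1
```

Between writes and the next `sync`, replicas may serve stale values, which is exactly the window of inconsistency the system tolerates in exchange for reduced inter-node chatter.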
Eventually-consistent data stores commonly employ read caches to improve performance. For example, a distributed, eventually-consistent data store may employ a centralized read cache, which provides requestors with quick access to frequently-used data elements. To promote fast responses, the read cache may be implemented in semiconductor memory or other fast, nonvolatile memory. The read cache connects to slower, persistent storage, which maintains more permanent copies of data elements and typically has a much larger storage capacity than the read cache.
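A read cache of the kind described above can be sketched as a read-through cache. The `ReadCache` class below is a hypothetical name introduced for illustration, with an in-memory dictionary standing in for the slower persistent storage.

```python
class ReadCache:
    """Read-through cache sitting in front of slower persistent storage."""

    def __init__(self, backing_store):
        self.backing_store = backing_store  # stand-in for persistent storage
        self.cache = {}                     # stand-in for fast memory

    def get(self, key):
        if key in self.cache:
            return self.cache[key]          # cache hit: fast path
        value = self.backing_store[key]     # cache miss: go to slow storage
        self.cache[key] = value             # keep a copy for later requests
        return value


disk = {"user:42": "alice"}
cache = ReadCache(disk)
assert cache.get("user:42") == "alice"     # miss: loaded from storage
assert cache.get("user:42") == "alice"     # hit: served from the cache
```

Note that once a value is cached, later updates to the backing store are not visible through `get` until the entry is evicted, which motivates the timeout policy discussed next.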
Some read caches employ a timeout policy for data eviction. For example, whenever a read cache loads a data element from persistent storage, the read cache assigns the data element an initial time to live (TTL), which immediately begins counting down to zero. The data element expires when the TTL reaches zero. If a client later requests the expired data element, the read cache signals a cache miss and reloads the requested data element from persistent storage. The read cache then assigns the reloaded element a fresh TTL, which again counts down to zero. This arrangement ensures that data can circulate into and out of the cache, and that the data in the cache does not fall behind the corresponding data in persistent storage by more than the initial TTL.
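The TTL policy just described can be sketched by extending the read-through pattern with an expiry time per entry. The `TTLCache` class below is a hypothetical name introduced for illustration; the optional `now` parameter simply makes the clock explicit so the expiry behavior can be demonstrated deterministically.

```python
import time


class TTLCache:
    """Read cache whose entries expire after a fixed time to live (TTL)."""

    def __init__(self, backing_store, ttl_seconds):
        self.backing_store = backing_store  # stand-in for persistent storage
        self.ttl = ttl_seconds              # initial TTL for each entry
        self.cache = {}                     # key -> (expiry_time, value)

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        entry = self.cache.get(key)
        if entry is not None and now < entry[0]:
            return entry[1]                 # hit: TTL has not reached zero
        # Miss or expired entry: reload from storage and assign a fresh TTL.
        value = self.backing_store[key]
        self.cache[key] = (now + self.ttl, value)
        return value


disk = {"k": "v1"}
cache = TTLCache(disk, ttl_seconds=10)
assert cache.get("k", now=0) == "v1"    # loaded; expires at time 10
disk["k"] = "v2"                        # storage moves ahead of the cache
assert cache.get("k", now=5) == "v1"    # still live: stale value served
assert cache.get("k", now=10) == "v2"   # expired: reloaded from storage
```

As the usage lines show, a cached value can be at most one TTL behind persistent storage, matching the bound stated above.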