In a web-based system with constrained resources, presenting requested information to a user with minimal response time is important; as latency increases, users are more likely to abandon transactions. Sophisticated content delivery mechanisms often rely on real-time submission of content requests that result in database queries for the requested content. As the database of content grows, the time required to process such queries increases. In addition, queries may be sophisticated, involving multiple criteria for determining the content to be returned, which further increases processing requirements.
Several caching approaches are well known in the art. Known caching algorithms improve performance by keeping a subset of possible query responses ready for immediate return in response to a corresponding query. A variety of eviction mechanisms are employed, such as “recency” (retaining the most recently requested entries) and “popularity” (retaining the most frequently requested entries), both of which rely on past query history.
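By way of illustration, a recency-based cache of the kind described above may be sketched as a least-recently-used (LRU) structure: responses are retained for immediate return, and when capacity is exceeded the entry whose query was seen longest ago is evicted. The sketch below is a minimal illustration under these assumptions; the class and method names are chosen for exposition and do not correspond to any particular known implementation.

```python
from collections import OrderedDict


class LRUCache:
    """Recency-based cache: evicts the least recently used entry when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._store = OrderedDict()  # maps query -> cached response

    def get(self, query):
        """Return the cached response for a query, or None on a miss."""
        if query not in self._store:
            return None
        self._store.move_to_end(query)  # mark entry as most recently used
        return self._store[query]

    def put(self, query, response):
        """Cache a response, evicting the oldest entry if over capacity."""
        if query in self._store:
            self._store.move_to_end(query)
        self._store[query] = response
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used
```

A popularity-based variant would instead track a hit count per entry and evict the least frequently requested one; both strategies depend only on past query history, as noted above.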
There is, therefore, a need for a caching technique which appreciably reduces response time.