The present invention relates generally to the field of computer memory management, and more particularly to optimizing usage of cache memory.
As of Apr. 1, 2016, the Wikipedia entry for the word “cache” states as follows: “In computing, a cache . . . is a component that stores data so future requests for that data can be served faster; the data stored in a cache might be the result of an earlier computation, or the duplicate of data stored elsewhere. A cache hit occurs when the requested data can be found in a cache, while a cache miss occurs when it cannot. Cache hits are served by reading data from the cache, which is faster than re-computing a result or reading from a slower data store; thus, the more requests can be served from the cache, the faster the system performs . . . . [C]aches have proven themselves in many areas of computing because access patterns in typical computer applications exhibit the locality of reference. Moreover, access patterns exhibit temporal locality if data is requested again that has been recently requested already, while spatial locality refers to requests for data physically stored close to data that has been already requested.”
One conventional type of cache is a “dynamic entry cache.” In a dynamic entry caching system, cached items are dynamically discarded from the cache after being identified as idle (no longer being accessed or updated).
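The idle-based discard behavior described above can be sketched as follows. This is a minimal illustration, not any particular product's implementation; the class name `DynamicEntryCache`, the `idle_limit` parameter, and the injectable `clock` are all assumptions made for the example.

```python
import time

class DynamicEntryCache:
    """Minimal sketch of a dynamic entry cache: entries that have not been
    accessed or updated within idle_limit seconds are treated as idle and
    dynamically discarded."""

    def __init__(self, idle_limit, clock=time.monotonic):
        self.idle_limit = idle_limit
        self.clock = clock       # injectable clock, useful for testing
        self.entries = {}        # key -> (value, last_touched)

    def put(self, key, value):
        # An update refreshes the entry's last-touched time.
        self.entries[key] = (value, self.clock())

    def get(self, key):
        if key in self.entries:                         # cache hit
            value, _ = self.entries[key]
            self.entries[key] = (value, self.clock())   # access refreshes
            return value
        return None                                     # cache miss

    def evict_idle(self):
        """Discard every entry idle longer than idle_limit."""
        now = self.clock()
        idle = [k for k, (_, t) in self.entries.items()
                if now - t > self.idle_limit]
        for key in idle:
            del self.entries[key]
```

With a fake clock, putting two entries at time 0, touching only the first at time 5, and evicting at time 12 with a 10-second limit discards the untouched entry while retaining the recently accessed one.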
A conventional client-server application typically uses a server sub-system to service “client requests.” In a typical example of a client request: (i) a user clicks on a button or link in a web browser; (ii) the browser formulates a client request including a web page URL; (iii) the browser sends the client request to a web server; (iv) the web server prepares a response, including the contents of the web page at the URL, and sends the response to the browser; and (v) the browser receives the response and displays the web page included therein. Typically, in servicing the client request, three different portions of hardware/software (called “layers”) of the server sub-system cooperate as follows: (i) when an end user sends a request using a user interface, a request pipeline is initiated from a “view” layer of the client-server application; (ii) the request pipeline is received by a “controller” layer; and (iii) the controller layer interacts with a respective data model (a “model” layer) to prepare the response and send it back to the view layer and, in turn, to the end user. The three layers (“view”, “controller” and “model”) maintain respective sub-caching layers within a single dynamic entry cache, which is dedicated to (or “owned” by) the client-server application.
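The arrangement of per-layer sub-caching layers within one shared dynamic entry cache can be sketched as follows. This is an illustrative sketch only: the names `SubCache`, `handle_request`, and `model_fetch` are invented for the example, the shared cache is modeled as a plain dictionary with key prefixes standing in for sub-caching layers, and the model-layer "fetch" is a stand-in for real data access.

```python
class SubCache:
    """A layer's private sub-caching layer inside the single shared
    dynamic entry cache, modeled here as a key-prefix namespace."""

    def __init__(self, shared, prefix):
        self.shared = shared
        self.prefix = prefix

    def get(self, key):
        return self.shared.get(self.prefix + ":" + key)

    def put(self, key, value):
        self.shared[self.prefix + ":" + key] = value

# The single dynamic entry cache dedicated to ("owned" by) the application.
shared_cache = {}
view_cache = SubCache(shared_cache, "view")
controller_cache = SubCache(shared_cache, "controller")
model_cache = SubCache(shared_cache, "model")

def model_fetch(url):
    # Model layer: look up the page contents, caching the result.
    page = model_cache.get(url)
    if page is None:
        page = "<html>contents of %s</html>" % url  # stand-in for real data access
        model_cache.put(url, page)
    return page

def handle_request(url):
    # View layer initiates the request pipeline; the controller layer
    # receives it and interacts with the model layer to prepare a response.
    response = controller_cache.get(url)
    if response is None:
        response = model_fetch(url)
        controller_cache.put(url, response)
    return response  # sent back to the view layer and, in turn, the end user
```

After servicing one request, the single shared cache holds entries under both the controller-layer and model-layer namespaces, illustrating how the layers share one dynamic entry cache while keeping their entries separate.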