The present invention relates to one or more page(s) of data and the relationship between an in-memory pool and a physical storage device with respect to the page(s) of data. More specifically, the invention relates to asynchronously swapping compressed page(s) from the in-memory pool to the physical storage device.
Caching is a common technique used to speed up memory access. Cache memory is smaller, faster, and typically more expensive than physical storage. When a processing unit requests data that resides in main memory, the processing system transmits the requested data to the processor, and may also store the data in a cache memory. When the processor issues a subsequent request for the same data, the processing system first checks the cache memory. If the requested data resides in the cache, the system gets a cache “hit” and delivers the data to the processor from the cache. If the data is not resident in the cache, a cache “miss” occurs, and the system retrieves the data from main memory. Frequently utilized data is thereby retrieved more rapidly than less frequently requested data, and overall data access latency, i.e., the time between a request for data and delivery of the data, is reduced.
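The hit/miss behavior described above can be sketched as follows. This is a minimal illustration with hypothetical names (`SimpleCache`, `read`), not a description of the claimed invention: on a request, the cache is checked first; a hit is served from the cache, and a miss is served from main memory and then populated into the cache for subsequent requests.

```python
class SimpleCache:
    """Illustrative cache front-end over a slower main memory."""

    def __init__(self):
        self.cache = {}        # address -> data held in fast cache memory
        self.main_memory = {}  # address -> data held in slower main memory
        self.hits = 0
        self.misses = 0

    def read(self, address):
        if address in self.cache:         # cache "hit": serve from cache
            self.hits += 1
            return self.cache[address]
        self.misses += 1                  # cache "miss": go to main memory
        data = self.main_memory[address]
        self.cache[address] = data        # populate cache for next time
        return data
```

A first read of an address misses and falls through to main memory; a second read of the same address hits and is served from the cache, which is the latency reduction the passage describes.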
It is recognized that cache memory has limited capacity. One solution is to compress the pages in the cache so that more pages can be retained within that limited capacity. However, compression merely defers swapping pages from the cache to physical storage; once the compressed pages exhaust the available capacity, pages must still be swapped out.
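The limitation noted above can be illustrated with a short sketch, assuming hypothetical names (`CompressedPool`, `put`, `get`) and standard `zlib` compression. Compressing pages lets the fixed-capacity pool hold more of them, but when the compressed pages themselves exceed capacity, pages must still be swapped out to the (here simulated) physical storage device:

```python
import zlib

class CompressedPool:
    """Illustrative in-memory pool that compresses pages to retain more
    of them, swapping to physical storage only when capacity is exhausted."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.pages = {}    # page id -> compressed page bytes (in memory)
        self.storage = {}  # stand-in for the physical storage device

    def put(self, page_id, page_bytes):
        blob = zlib.compress(page_bytes)
        # Compression defers, but does not eliminate, swap-out: once the
        # compressed pages no longer fit, evict (oldest-first here, purely
        # for illustration) to the physical storage device.
        while self.used + len(blob) > self.capacity and self.pages:
            victim, victim_blob = next(iter(self.pages.items()))
            del self.pages[victim]
            self.used -= len(victim_blob)
            self.storage[victim] = victim_blob  # write to physical storage

        self.pages[page_id] = blob
        self.used += len(blob)

    def get(self, page_id):
        blob = self.pages.get(page_id)
        if blob is None:
            blob = self.storage[page_id]  # fall back to physical storage
        return zlib.decompress(blob)
```

Even with compression, filling the pool past its capacity forces a swap to `storage`, which is the deferred-swapping problem the passage identifies.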