An ever-increasing reliance on information, and on the computing systems that produce, process, distribute, and maintain such information in its various forms, continues to put great demands on techniques for providing data storage and access to that data storage. Today's data centers and cloud computing environments require increased input/output (I/O) performance to support large-scale applications such as databases, web servers, e-commerce applications, file servers, and electronic mail. These applications typically accommodate a large number of end-users. To meet the service requirements of these end-users, data center operators deploy servers with high I/O throughput. A larger number of end-users on a server translates to a greater number of I/O operations required from that server. As a consequence, servers are often maintained at low storage capacity utilization in order to sustain the required number of I/Os, which is an inefficient use of resources.
Solid state drives (SSDs) are storage devices capable of high I/O performance. An SSD uses flash components to store data and, unlike a hard disk drive (HDD), has no moving parts and no rotating media. Compared with HDDs, SSDs offer higher read bandwidth, more I/Os per second, better mechanical reliability, and greater resistance to shock and vibration. But SSDs have more limited capacity than HDDs, and therefore generally cannot serve as a wholesale replacement for HDDs in a data center.
SSDs and other similar memory devices can, however, be used to improve the I/O performance of data center servers by functioning as a caching layer between server main memory and HDDs, storage volumes built from HDDs, or other stored data sources. Stored data can be copied to an associated cache (e.g., an SSD) upon access of that data in order to improve the speed of subsequent accesses to that data. But since the cache will generally have far less storage capacity than the associated HDD or storage volume, the cache will ultimately cease to have sufficient free space to copy newly accessed data. A mechanism for efficiently identifying areas of cache memory to make available is therefore desirable.
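The passage above does not name a particular eviction mechanism; as one illustrative possibility, a least-recently-used (LRU) policy frees cache space by discarding the block whose last access is oldest. The sketch below is a minimal, hypothetical illustration of that idea (the `LRUCache` class, `access` method, and `read_from_backing_store` callback are assumptions for the example, not part of the source):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU eviction sketch: when the cache is full, the block
    whose last access is oldest is evicted to make room for newly
    accessed data copied from the slower backing store (e.g., an HDD)."""

    def __init__(self, capacity):
        self.capacity = capacity      # number of cacheable blocks (e.g., SSD extents)
        self.blocks = OrderedDict()   # block_id -> data, ordered oldest -> newest

    def access(self, block_id, read_from_backing_store):
        if block_id in self.blocks:
            # Cache hit: mark the block as most recently used.
            self.blocks.move_to_end(block_id)
            return self.blocks[block_id]
        # Cache miss: fetch the data from the backing store.
        data = read_from_backing_store(block_id)
        if len(self.blocks) >= self.capacity:
            # No free space left: evict the least recently used block.
            self.blocks.popitem(last=False)
        self.blocks[block_id] = data
        return data

# Usage: a two-block cache in front of a simulated HDD.
hdd = {1: "a", 2: "b", 3: "c"}
cache = LRUCache(capacity=2)
cache.access(1, hdd.get)   # miss; cache holds {1}
cache.access(2, hdd.get)   # miss; cache holds {1, 2}
cache.access(1, hdd.get)   # hit; block 1 becomes most recent
cache.access(3, hdd.get)   # miss; block 2 (least recent) is evicted
```

After the final access the cache holds blocks 1 and 3, showing how recency of access, rather than insertion order, decides which area of cache memory is made available. Real caching layers track many additional factors (dirty data, access frequency, block size), which is why more efficient identification mechanisms are desirable.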