Most database systems preallocate and manage a region of memory for buffering and caching data pages from storage. Modern database buffers and file system caches comprise a pool of page frames that can consume several gigabytes of memory. This memory stages data from storage for query processing and provides fast access to data. For best performance, these page frames are typically managed by a buffer replacement algorithm, such as the least recently used (LRU) scheme, that keeps the data most likely to be accessed in the future in memory. In real environments, numerous processes contend for these page frames, so the buffer replacement algorithm must be designed carefully. However, the exemplary embodiments observe that database systems often cannot saturate the storage device (i.e., SSD) regardless of client scale. An in-depth performance analysis of database systems reveals that page reads are often delayed by preceding page writes when there is high concurrency between reads and writes. This “read blocked by write” problem can negatively impact CPU utilization.
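The LRU-managed pool of page frames described above can be sketched as follows. This is a minimal, hypothetical illustration (the class and method names are assumptions, not taken from any particular database system): a fixed number of frames cache pages by page id, a buffer hit promotes the page to most-recently-used, and a miss evicts the least recently used frame before staging the page from storage.

```python
from collections import OrderedDict

class LRUBufferPool:
    """Minimal sketch of an LRU-managed pool of page frames (hypothetical)."""

    def __init__(self, capacity):
        self.capacity = capacity     # number of page frames in the pool
        self.frames = OrderedDict()  # page_id -> page data, ordered by recency

    def read_page(self, page_id, fetch_from_storage):
        # Buffer hit: promote the page to the most-recently-used position.
        if page_id in self.frames:
            self.frames.move_to_end(page_id)
            return self.frames[page_id]
        # Buffer miss: evict the least recently used frame if the pool is
        # full, then stage the requested page from storage into a frame.
        if len(self.frames) >= self.capacity:
            self.frames.popitem(last=False)  # evict LRU frame
        page = fetch_from_storage(page_id)
        self.frames[page_id] = page
        return page
```

In a real buffer manager the eviction path is where the contention arises: if the victim frame is dirty, it must first be written back to storage, which is one way a page read can be delayed behind a preceding page write.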