Memory management is a fundamental issue in modern computer systems. Typically, a computer system includes a memory hierarchy that ranges from a small, fast main memory, which acts as a cache in front of a larger but slower auxiliary memory. The cache is generally implemented in physical memory, such as Random Access Memory (RAM), while the auxiliary memory is implemented in a storage device, such as a disk drive or hard disk. Both memories are usually managed in uniformly sized units known as pages. Because of their impact on performance, the caching algorithms that manage the contents of main memory are central to the efficiency of computer systems, servers, storage systems, and operating systems.
Page replacement is a technique for maximizing the utility of a system's available memory by caching the “right data” from much slower secondary storage. The “right data” is called the working set: the data that is currently in use by applications and is likely to be needed in the near future. Page replacement algorithms try to detect what belongs to the working set by observing memory access patterns. When the available page frames are fully occupied with cached pages and an application needs to access an uncached page of data from disk, the page replacement policy in the operating system must determine the best candidate page to evict from memory so that its page frame can be reused for the new page.
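As a concrete illustration of the eviction process described above, the following sketch implements least-recently-used (LRU) replacement, one common policy that approximates the working set by assuming recently accessed pages will be needed again soon. The class name `LRUPageFrames`, the frame count, and the `load_from_disk` callback are all illustrative assumptions, not part of any particular operating system's interface.

```python
from collections import OrderedDict

class LRUPageFrames:
    """A fixed pool of page frames managed with LRU replacement.

    When every frame is occupied and an uncached page is accessed,
    the least-recently-used page is evicted and its frame is reused.
    """

    def __init__(self, num_frames):
        self.num_frames = num_frames
        # Maps page number -> page data; ordered from least to most
        # recently used, so the eviction victim is always at the front.
        self.frames = OrderedDict()

    def access(self, page, load_from_disk):
        if page in self.frames:
            # Cache hit: mark this page as most recently used.
            self.frames.move_to_end(page)
            return self.frames[page]
        # Cache miss: evict the LRU victim if all frames are occupied.
        if len(self.frames) >= self.num_frames:
            self.frames.popitem(last=False)
        # Reuse the freed frame for the newly loaded page.
        self.frames[page] = load_from_disk(page)
        return self.frames[page]
```

For example, with two frames, accessing pages 1, 2, then 1 again, and finally page 3 forces an eviction; page 2 is the victim because page 1 was touched more recently. Real kernels typically use cheaper approximations of LRU (such as clock algorithms) because tracking exact recency on every memory access is too expensive in hardware.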