A typical computer system uses as much memory as can be afforded and benefits from fast access to whatever memory is available. Because large amounts of memory are expensive, as is particularly fast memory, a memory hierarchy of memories of different speeds and sizes has become the common method of handling data storage in modern computer systems.
The fastest memory is generally used in a cache, which is a place where certain data predicted likely to be used again soon is stored. A cache typically does not store unique data; rather, it stores a more quickly accessible copy of data held in other, slower memory. Because the cache is the fastest memory in most systems, it is also the most expensive and is therefore relatively limited in size. More sophisticated computer systems have multiple levels of cache, with faster levels having smaller storage capacity than slower, larger levels.
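The caching behavior described above can be illustrated with a minimal sketch. This is not any system's actual implementation; it is a hypothetical model in which the cache holds copies of entries from a slower backing store and evicts the least recently used copy when full:

```python
from collections import OrderedDict

class Cache:
    """Minimal sketch of a cache: holds copies of data from a slower
    backing store and evicts the least recently used entry when full."""

    def __init__(self, backing_store, capacity):
        self.backing = backing_store   # the slower memory (a dict here)
        self.capacity = capacity
        self.lines = OrderedDict()     # address -> cached copy of the data

    def read(self, addr):
        if addr in self.lines:              # hit: fast path
            self.lines.move_to_end(addr)    # mark as most recently used
            return self.lines[addr]
        value = self.backing[addr]          # miss: fetch from slower memory
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)  # evict least recently used copy
        self.lines[addr] = value            # keep a copy, not the only instance
        return value
```

Note that evicting a line loses nothing, because the cache stores only a copy; the original data remains in the backing store.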
Main memory itself is often not large enough to store all the data that even a typical personal computer will use when executing multiple or large programs. When the computer system needs to store more data than will fit in its physical memory, the excess data is stored to disk. When a data element stored on disk is needed, a page fault occurs. The memory page containing the needed data is then loaded from disk storage into memory; if memory is full, another page is first evicted, and written back to disk if it has been modified, to make room for the newly loaded memory page.
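The demand-paging sequence above can be sketched as follows. This is a simplified, hypothetical model (victim selection is arbitrary here, where a real system would use a replacement policy), showing the essential order of operations: detect the fault, evict a victim, write it back only if modified, then load the needed page:

```python
class PagedMemory:
    """Sketch of demand paging: a fixed number of physical frames backed
    by disk. On a page fault, a victim page is evicted (written back to
    disk only if dirty) before the needed page is loaded."""

    def __init__(self, num_frames, disk):
        self.num_frames = num_frames
        self.disk = disk       # page number -> page contents
        self.frames = {}       # resident pages: page number -> (data, dirty)

    def access(self, page, write=False):
        if page not in self.frames:                   # page fault
            if len(self.frames) >= self.num_frames:   # no free frame
                victim, (vdata, dirty) = self.frames.popitem()  # arbitrary victim
                if dirty:
                    self.disk[victim] = vdata         # write back modified page
            self.frames[page] = (self.disk[page], False)  # load needed page
        data, _ = self.frames[page]
        if write:
            self.frames[page] = (data, True)          # mark page dirty
        return data
```

A clean (unmodified) victim can simply be discarded, since the disk copy is still current; only dirty pages cost an extra disk write on eviction.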
Organizing memory into pages is also useful because it allows the computer system to address memory using virtual addresses, with special components such as a translation lookaside buffer (TLB) able to map virtual addresses to particular pages, whether stored in memory or on disk. This allows the computer system to address more memory than is physically available, using virtual addresses that the memory management system translates into physical addresses within specific pages.
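The translation described above can be sketched as a short function. This is an illustrative model, assuming 4 KiB pages and representing the TLB and page table as simple dictionaries: the virtual page number selects a mapping, and the offset within the page is carried over unchanged:

```python
PAGE_SIZE = 4096  # assumed 4 KiB pages

def translate(vaddr, page_table, tlb):
    """Sketch of virtual-to-physical translation: look up the virtual
    page number (TLB first, then the full page table) and reattach the
    unchanged page offset."""
    vpn = vaddr // PAGE_SIZE       # virtual page number
    offset = vaddr % PAGE_SIZE     # position within the page
    if vpn in tlb:                 # fast path: translation already cached
        frame = tlb[vpn]
    else:                          # TLB miss: consult the page table
        frame = page_table[vpn]
        tlb[vpn] = frame           # cache the translation for next time
    return frame * PAGE_SIZE + offset
```

Because consecutive virtual pages may map to scattered physical frames, each page in principle needs its own mapping entry, which is what makes the size of the translation structures a concern.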
Further, each separate process running on a modern computer system also typically has its own address space. Because it would be too expensive to allocate a full address space worth of physical memory to each process, virtual memory is used to divide the physical memory into blocks such as pages and to coordinate sharing of the actual physical memory. Protection schemes ensure that each process accesses only the memory allocated to it, and are generally implemented in hardware and software along with other page management algorithms.
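A protection check of the kind just described can be sketched as follows. The entry format and field names here are hypothetical; the point is that each mapping records ownership and permissions, and any mismatch raises a fault (which in hardware would trap to the operating system):

```python
class ProtectionFault(Exception):
    """Raised when a process touches memory it is not allowed to access."""

def check_access(entry, pid, write):
    """Sketch of a per-process protection check against one mapping entry.
    `entry` is a hypothetical record: {"frame": ..., "owner": ..., "writable": ...}."""
    if entry["owner"] != pid:            # page belongs to another process
        raise ProtectionFault("page not mapped for this process")
    if write and not entry["writable"]:  # permission bits forbid this access
        raise ProtectionFault("write to read-only page")
    return entry["frame"]                # access permitted: yield the frame
```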
The mapping of virtual memory to physical memory is stored in a data structure such as a page table, along with other information such as protection data and use data. Recently used entries of this data structure are cached in the TLB, which is itself of limited size due to the need for rapid access. Increasing the size of the TLB would enable more mapping data to be cached, but at additional die area, power, and monetary costs. A more practical solution is to enable mapping more than one page at a time with a single mapping entry, where consecutive pages of virtual memory are mapped to consecutive pages of physical memory.
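The single-entry-for-multiple-pages idea can be sketched by extending the hypothetical entry format with a page count. In this illustrative model, one entry records a base virtual page, a base physical frame, and the number of consecutive pages it covers, so one entry translates an entire contiguous run:

```python
PAGE_SIZE = 4096  # assumed 4 KiB pages

def translate_ranged(vaddr, entries):
    """Sketch of range-style mapping entries: each entry is a tuple
    (base_vpn, base_frame, count) covering `count` consecutive pages
    mapped to `count` consecutive frames."""
    vpn = vaddr // PAGE_SIZE
    offset = vaddr % PAGE_SIZE
    for base_vpn, base_frame, count in entries:
        if base_vpn <= vpn < base_vpn + count:   # vpn falls inside this run
            # Consecutive mapping: the offset of the page within the run
            # is the same on the virtual and physical sides.
            return (base_frame + (vpn - base_vpn)) * PAGE_SIZE + offset
    raise KeyError("unmapped address")           # would trigger a page fault
```

With per-page entries, a run of four pages would consume four TLB slots; here the same run consumes one entry, so the same TLB capacity covers more of the address space.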
Accordingly, it is desirable to manage virtual memory mappings more efficiently.