Many computer systems in use today employ some form of virtual memory scheme in order to address larger sections of memory than are available as physical memory (sometimes termed "real" memory) within the computer itself. Such a virtual memory scheme allows a central processing unit (CPU) to address more memory than is available as random access memory within the computer, and allows data and instructions to be relocated in an organized fashion. In addition to solving the problem of relocating data and instructions in memory, a virtual memory scheme provides the illusion of a large, fast memory at lower hardware cost. The purpose of a virtual memory scheme is to allow much greater flexibility in the physical placement of a particular datum within a possibly multi-level memory hierarchy. A multi-level memory hierarchy is an ordered collection of physical memory systems, each one typically larger and slower (and hence cheaper per byte) than the preceding one. A virtual memory scheme typically involves the use of pages, in which a page from virtual memory may be brought into physical memory and then written back to a hard disk or the like when finished. Such an activity is also called page swapping.
However, in order to perform input/output (I/O) operations, most virtual memory subsystems require page-by-page locking of the region undergoing I/O: a lock is acquired on each virtual memory page when an I/O operation is initiated and released when that operation completes. For an operating system that is highly parallel and performs a large volume of I/O, locking and releasing pages around every I/O operation is extremely costly, because creating and removing these I/O locks imposes substantial CPU overhead. For example, in certain virtual memory subsystems, roughly 75% of the CPU overhead for certain I/O operations can be attributed to this page-by-page locking.
Thus, the swapping in and out of pages, and the corresponding creation and release of locks on a page-by-page basis, can result in tremendous overhead. To reduce this overhead, it is desirable to keep a page in physical memory and prevent unmapping from occurring unless necessary. In other words, in order to reduce CPU overhead costs, it would be desirable to maintain a continuous assignment of a physical page to a logical page in virtual memory until the memory space is unneeded or otherwise required. It would also be desirable to have a locking technique for use with virtual memory subsystems that retains the locks on a region of virtual memory between I/O operations, and allows processes to locate and utilize cached representations of those locks efficiently.