One of the key computational resources for a computer application is memory space. Typically, multiple applications run concurrently, competing for access to the available physical memory via a memory manager in a system. When the memory space required to support the running applications exceeds the physical memory of the system, the memory manager may compensate for the deficiency with operations such as memory swaps to keep the applications running. However, such operations may be costly and may tax the performance of the whole system because of the associated disk IO activities.
Usually, a memory manager may monitor memory usage in a system to ensure that a required amount of free physical memory remains available, alleviating the penalty of costly memory management operations. In some systems, if memory usage reaches a critical level, the memory manager may take memory management actions to increase the amount of free memory. For example, the memory manager may look for memory pages that have not been used for a period of time (e.g. based on the least recently used (LRU) policy implemented in the vm_pageout_scan( ) routine of a UNIX operating system) and page them out to disk (e.g. via swap operations). Alternatively, the memory manager may free up a portion of the memory, or the memory pages, belonging to the application or process that occupies the largest amount of memory space among the currently active applications or processes. However, such memory management operations may be agnostic as to how the memory being paged out is used by the applications. As a result, a memory page that is critical to one application may be paged out while a low-priority memory page of another application is retained.
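The application-agnostic behavior described above can be sketched in a few lines. The following is a minimal illustration, not an excerpt of any actual kernel code: the class name, the frame-count parameter, and the page identifiers are all hypothetical. When the resident set exceeds the available frames, the least recently used page is evicted regardless of which application owns it or how critical the page is to that application.

```python
from collections import OrderedDict

class LruPageoutManager:
    """Illustrative, application-agnostic LRU page-out policy (a sketch)."""

    def __init__(self, total_frames):
        self.total_frames = total_frames
        # OrderedDict preserves access order: least recently used page first.
        self.resident = OrderedDict()   # page id -> owning application
        self.swapped_out = []           # (page id, owner) pairs paged to disk

    def touch(self, page, app):
        """Record an access to `page`; bring it in, evicting LRU pages if needed."""
        if page in self.resident:
            self.resident.move_to_end(page)   # now the most recently used
            return
        self.resident[page] = app
        # Agnostic policy: evict the oldest page, whatever its owner or priority.
        while len(self.resident) > self.total_frames:
            victim, owner = self.resident.popitem(last=False)
            self.swapped_out.append((victim, owner))

# With 3 frames, application A's older page "a2" is the eviction victim,
# even though the policy has no idea whether "a2" is critical to A.
mgr = LruPageoutManager(total_frames=3)
mgr.touch("a1", "A")
mgr.touch("a2", "A")
mgr.touch("b1", "B")
mgr.touch("a1", "A")   # "a1" becomes most recently used
mgr.touch("c1", "C")   # over capacity: LRU page "a2" is paged out
```

In this toy run, `mgr.swapped_out` ends up as `[("a2", "A")]`, demonstrating the problem stated above: the eviction decision is made purely on recency of use, with no per-application notion of page importance.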
Therefore, existing memory management approaches are not capable of leveraging application-specific memory management operations to utilize limited memory capacity in a distributive, effective and intelligent manner.