1. Field of the Invention
The present invention relates to computer memory. More specifically, the present invention relates to a method and an apparatus for managing surplus memory in a multitasking system.
2. Related Art
Modern computing devices must be able to manage computational resources, such as central processing unit (CPU) time, memory, or the number of threads allowed to a task, in order to guarantee that certain amounts of resources are available for tasks. Controlling and managing heap memory is one of the hardest problems in this area, if not the hardest, largely because of the difficulty of revoking or reclaiming memory from an uncooperative task. This is much harder than, for example, revoking CPU time, which may be as simple as not scheduling the offending task for more CPU time.
Typically, when a task needs storage space for a new object, a memory management system within the computing system allocates memory to the task from a heap. At some time after the task releases the allocated memory, the memory management system reclaims the memory using a garbage collector.
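The allocate-and-reclaim cycle described above can be sketched as follows. This is a minimal illustration under simplifying assumptions (the heap is a fixed pool of equal-size slots, and liveness is given by an explicit root set); the `Heap`, `allocate`, and `collect` names are hypothetical and do not come from any actual memory management system.

```python
# Minimal sketch of heap allocation and garbage-collected reclamation.
# Assumptions: fixed pool of equal-size slots; liveness is decided by
# membership in an explicit root set rather than by tracing references.

class Heap:
    def __init__(self, size):
        self.slots = [None] * size          # the heap itself
        self.free = list(range(size))       # indices of unallocated slots

    def allocate(self, obj):
        """Hand a free slot to the requesting task."""
        if not self.free:
            raise MemoryError("heap exhausted; a collection is needed")
        slot = self.free.pop()
        self.slots[slot] = obj
        return slot

    def collect(self, roots):
        """Reclaim every slot whose object is no longer referenced."""
        for i, obj in enumerate(self.slots):
            if obj is not None and obj not in roots:
                self.slots[i] = None        # object is garbage
                self.free.append(i)         # slot becomes available again
```

In this sketch, `collect` plays the role of the garbage collector: once the task stops referencing an object (it drops out of `roots`), the slot is returned to the free pool for future allocations.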
Modern garbage collection systems typically move data to make allocation faster and to improve data locality. Garbage collection is often performed using generational techniques, where long-lived (also known as old or tenured) data is moved around less frequently, and new (young) objects are allocated in a new (young) generation where they either die quickly or are moved around to finally get promoted (tenured) to an old generation.
FIG. 1 illustrates heap memory organization 100 of a typical generational garbage collector. Heap memory organization 100 is divided into old generation 110 and new generation 102. New generation 102 follows a design suggested by Ungar (Ungar, D., “Generation Scavenging: A Non-Disruptive High Performance Storage Reclamation Algorithm,” ACM SIGPLAN Notices 19(5), April 1984), and is sub-divided into a creation space, eden 104, and an aging space consisting of from-area 106 and to-area 108.
New objects are initially allocated in eden 104, except for large arrays, which are allocated directly in old generation 110. When the new space is garbage collected (scavenged), the survivors from eden 104 are copied into to-area 108, along with objects from from-area 106 whose age is below an established threshold. Mature objects (i.e., those with age at or above the threshold) are copied to old generation 110. This threshold is chosen to give optimum garbage-collector performance and typically corresponds to several garbage-collection cycles. The age of each object in to-area 108 is then incremented and, finally, the roles of to-area 108 and from-area 106 are swapped.
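The scavenging steps above can be sketched as follows, assuming for simplicity that every object in eden 104 and from-area 106 is live (a real collector would first determine liveness by tracing from roots). All identifiers, including the `TENURE_THRESHOLD` value, are illustrative assumptions rather than parts of any actual collector.

```python
# Minimal sketch of one generation-scavenging cycle: eden survivors and
# young from-area objects are copied to the to-area, mature objects are
# promoted (tenured) to the old generation, ages are incremented, and
# the from/to roles are swapped.

TENURE_THRESHOLD = 4  # assumed value; collectors tune this for performance


class Obj:
    def __init__(self, name, age=0):
        self.name = name
        self.age = age


def scavenge(eden, from_area, old_gen):
    """Run one scavenge; returns (new eden, new from-area)."""
    to_area = []
    # Survivors from eden are copied into the to-area.
    for obj in eden:
        to_area.append(obj)
    # From-area objects either keep aging or are tenured.
    for obj in from_area:
        if obj.age >= TENURE_THRESHOLD:
            old_gen.append(obj)       # mature: promote to old generation
        else:
            to_area.append(obj)       # still young: copy to to-area
    # The age of each object in the to-area is incremented.
    for obj in to_area:
        obj.age += 1
    # Eden is now empty; swapping roles makes the to-area the next
    # from-area (returned second) and leaves an empty to-area.
    return [], to_area
```

For example, with the threshold above, an object of age 4 in the from-area is promoted, while a freshly allocated eden object and an age-1 object are both copied to the to-area and aged by one cycle.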
Garbage collection on old generation 110 is triggered when old generation 110 fills up. Old generation 110 typically fills up and triggers a collection less frequently than new generation 102, because only objects surviving several collections of new generation 102 are allocated in old generation 110.
Using a generational garbage collector as described above in conjunction with a multitasking virtual machine (MVM), however, presents problems. Allocating heap memory for multiple tasks in MVM requires that each object be tagged with a parent-task identifier, which leads to space overhead. It is also difficult to determine the amount of memory allocated to a specific task within MVM and, therefore, to determine whether a task has used more memory than it is allowed. Additionally, all tasks in MVM must be suspended during garbage collection so that memory operations by a task do not interfere with the garbage collector.
Memory reservations can solve this problem, but they lead to wasteful memory management: memory that is not currently reserved as guaranteed memory for any task goes unused by every task. This unused memory is termed “surplus memory.”
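As a hypothetical illustration of how surplus memory arises under a pure-reservation scheme (all figures below are assumed for the example, not taken from any real system):

```python
# Illustrative arithmetic only: surplus memory is the part of the heap
# not currently reserved as guaranteed memory for any task.

heap_size_mb = 512
guarantees_mb = {"task_a": 128, "task_b": 96}  # per-task guaranteed memory

surplus_mb = heap_size_mb - sum(guarantees_mb.values())
# With pure reservations, this surplus (288 MB here) sits idle even
# when some task could productively use it.
```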
Since heap memory is a precious resource in languages with automatic memory management, what is needed is a method and an apparatus to enable the utilization of surplus memory at all times.