Currently, memory allocation for applications in many conventional operating systems (e.g., Linux, Unix, etc.) works by giving each application one heap, which can grow and shrink and in which all allocated memory is held. A heap as used herein broadly refers to an area in memory reserved for data that is created at runtime. This generally works well as long as an application allocates memory in a sane manner, but there are a few shortcomings that tend to have negative effects, especially for applications that allocate and deallocate many small chunks of memory, as most larger applications (e.g., Firefox, Thunderbird, Evolution) do.
One of the negative effects is memory fragmentation over time. Because newly allocated memory cannot always be fitted perfectly into a free space in the heap, over time the heap develops “holes” of unused, non-allocated memory. De-allocating memory inside the heap increases fragmentation as well.
Another issue with the current approach is that the heap can only shrink down to the highest still-allocated memory pointer. Because the heap cannot be re-organized, if an application frees memory at the start or in the middle of the heap, it is generally impossible to shrink the heap. In one extreme case, an application may allocate large amounts of memory and keep the last allocated pointer until the application finishes; the heap would then remain at its maximum size until the application ends. This can happen in real-life situations where, at the end of a function, memory for the result is allocated before the work data is deallocated.
Furthermore, heap memory is resident set memory, which means that the whole heap counts as allocated memory towards the kernel. Under memory pressure, everything inside the heap of an application needs to be swapped out, even the unused “holes” inside the heap. A hole in general refers to a contiguous area of memory in the heap that does not map to any part of the main memory and hence cannot be used by the application. Currently, this can be worked around by using an undocumented call named malloc_trim() in the GNU C Library (Glibc) in a Linux system, but that call is very computationally intensive and would require either heuristics in Glibc for when to call it, or changes in applications to call it after they have de-allocated large amounts of memory.