In recent decades, the prevalence of computers and their infusion into mainstream life have increased dramatically. As consumer use explodes, increased reliance has fueled demand for diversified functionality from this information portal. The acceptable response time for information is constantly decreasing as consumers seek faster access. Today, in a typical implementation, the information is stored in the memory of the device, in bits and bytes, and is recalled with memory management techniques. As the demand for information in less time continues to escalate, memory management increases in importance.
In the typical existing memory management scheme, memory is partitioned into discrete areas called buffers. Required information is retrieved from secondary storage, such as hard disks, floppy disks, tape, and other non-volatile memory. After the data is retrieved, for performance reasons it is stored or “cached” in a memory buffer area for a given period of time before being overwritten. Keeping the most popular data cached in resident memory or a memory buffer permits continuous use of the data by the requesting application. Further, keeping the data in a memory buffer or resident memory reduces or even eliminates secondary storage input/output (I/O) operations and the time penalties involved in secondary storage I/O, as compared with faster, albeit more expensive, cache or resident memory. One method of memory partitioning is the use of virtual memory managers (VMMs), which divide memory into fundamental blocks called pages. In a typical VMM these pages are the same size, and the individual pages are cached within a buffer. The static page size is selected for ease of management by the VMM, which results in a size not necessarily optimal for the file system or disk device requirements. In a typical VMM implementation, multiple pages may be stored within a single buffer. A problem arises when the size of the data being retrieved cannot be divided and stored evenly among one or more VMM pages. The result can be an odd, only partially filled page that nevertheless consumes an entire static page allocation. The problem is further exacerbated when the odd page requires the use of an additional buffer, or when the number of pages does not divide evenly into the buffer size, causing further inefficient memory allocation.
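To illustrate the internal fragmentation described above, the following sketch computes the bytes wasted when a request does not divide evenly into static pages, and when the resulting pages do not fill a buffer. The page and buffer sizes are illustrative assumptions, not values from any particular system:

```python
import math

PAGE_SIZE = 4096             # assumed static VMM page size, in bytes
BUFFER_SIZE = 4 * PAGE_SIZE  # assumed buffer holding four pages

def allocation_waste(request_bytes: int) -> dict:
    """Illustrate fragmentation under a fixed page/buffer scheme."""
    pages_needed = math.ceil(request_bytes / PAGE_SIZE)
    # The odd, partially filled page still consumes a full page allocation.
    page_waste = pages_needed * PAGE_SIZE - request_bytes
    buffers_needed = math.ceil(pages_needed * PAGE_SIZE / BUFFER_SIZE)
    # Pages that do not divide evenly into the buffer size waste buffer space.
    buffer_waste = buffers_needed * BUFFER_SIZE - pages_needed * PAGE_SIZE
    return {
        "pages": pages_needed,
        "page_waste": page_waste,
        "buffers": buffers_needed,
        "buffer_waste": buffer_waste,
    }
```

For a 5000-byte request under these assumed sizes, two pages are allocated (3192 bytes of the second page unused), and the single buffer holding them leaves room for two more pages that go unused.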
Buffers can be made and managed in various sizes, with similarly sized buffers arranged into groups and subgroups. Requests for buffers of a particular size can be satisfied from a grouping of buffers of that same size. If no buffers satisfy the requested size, additional operations must be incurred to either split larger buffers or join smaller buffers into an appropriately sized buffer.
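The grouping and split/join fallback described above can be sketched as follows. The class, its free-list layout, and the power-of-two size classes are illustrative assumptions rather than a description of any particular system:

```python
class SizeClassPool:
    """Sketch of size-grouped free buffers with a split fallback.

    Buffer sizes are assumed to be powers of two; free buffers are
    kept in per-size free lists (the subgroups described in the text).
    """

    def __init__(self, sizes):
        self.free = {s: [] for s in sizes}  # size -> free buffer ids

    def release(self, size, buf_id):
        self.free.setdefault(size, []).append(buf_id)

    def acquire(self, size):
        # First, try the subgroup of buffers of the exact requested size.
        if self.free.get(size):
            return self.free[size].pop()
        # Otherwise, take the smallest larger free buffer and split it
        # repeatedly -- the extra overhead the text describes.
        larger = sorted(s for s, bufs in self.free.items() if s > size and bufs)
        if not larger:
            return None  # would require joining smaller buffers, or a steal
        s = larger[0]
        buf = self.free[s].pop()
        while s > size:
            s //= 2
            self.free.setdefault(s, []).append(f"{buf}-hi{s}")  # keep one half free
            buf = f"{buf}-lo{s}"
        return buf
```

Note that satisfying a 1 KB request from a free 4 KB buffer leaves a 2 KB and a 1 KB remainder on the free lists, which is exactly the fragmentation cost the design choices below must weigh.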
The additional overhead of splitting or joining buffers carries its own set of design choices. Among these, fragmentation, the penalties of joining and splitting buffers, and the effect on their constituent users are of paramount concern. For these reasons, a good design attempts to ensure that memory buffer requests can predominantly be satisfied within the subgroup of buffers of the same size as the request.
The critical buffering decision arises when the requesting application requires additional data from secondary storage and all the buffers are in use. Which buffer should be utilized for the new data, or, more typically, which buffer should be overwritten with the new data? Mainstream memory management theory calls this the replacement policy. The process of overwriting data stored in a cache or memory buffer is also known in the art as a buffer steal. Different decisional algorithms exist and take their names from the type of replacement approach employed. For example, several commonly used algorithms are known as “Least Recently Used” (LRU), “Least Frequently Used” (LFU), and “Most Recently Used” (MRU). Examining the LRU algorithm more specifically, as its name implies, when a new buffer is needed, the buffer selected for replacement is the one that has not been accessed for the longest period of time.
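The LRU replacement policy described above can be sketched as a small buffer cache; the class name and interface are illustrative assumptions. On a miss with all buffers in use, the least recently accessed buffer is “stolen” and overwritten:

```python
from collections import OrderedDict

class LRUBufferCache:
    """Minimal LRU replacement-policy sketch over a fixed pool of buffers."""

    def __init__(self, num_buffers):
        self.num_buffers = num_buffers
        self.buffers = OrderedDict()  # key -> data, least recently used first

    def get(self, key, read_from_secondary):
        if key in self.buffers:
            self.buffers.move_to_end(key)  # hit: mark as most recently used
            return self.buffers[key]
        if len(self.buffers) >= self.num_buffers:
            self.buffers.popitem(last=False)  # buffer steal: evict LRU victim
        # Miss: incur the secondary-storage I/O and cache the result.
        self.buffers[key] = read_from_secondary(key)
        return self.buffers[key]
```

With two buffers and the access pattern a, b, a, c, the buffer holding b is stolen, so a subsequent request for b incurs a fresh read from secondary storage.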
Although these techniques of LRU, LFU, and MRU, taken alone or in combination, are useful for determining which buffer to steal so as to minimize the necessity of rereading replaced data, they are not without shortcomings. One shortcoming is that, regardless of the selection criteria, the goal of minimizing rereads is not always achieved. For instance, a buffer that is stolen and whose data is written to secondary storage may be immediately re-accessed by an application, incurring a new buffer steal and a reread of the previously cached data from secondary storage. Accordingly, a need exists to determine how long stolen data has been absent from the memory buffer, in order to optimize the buffers and buffer sizes in a given group and thereby minimize the number of re-reads of data from secondary storage back into the memory buffer.
Further, the use of LRU, LFU, or MRU alone does not help with buffer size determination and allocation. For example, pages are often dropped or stolen based on too little information; with LRU, the only decisional information used is the greatest time interval since the last access. The system is unable to differentiate buffers with infrequent access from those with little or no access without expending substantial, valuable system resources keeping infrequently used buffers in memory. Memory management systems that provide buffers of varying sizes, and that segregate these various-size buffers into subgroups, must balance the need for buffers of one size against the need for buffers of a different size. An optimum balancing algorithm would steal only buffers that are not going to be used again in the near future. This may mean that stealing within the subgroup of properly sized buffers is sufficient, or that stealing from larger buffers and splitting, or stealing from smaller buffers and coalescing, is required.
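A balancing steal policy of the kind described might be sketched as follows. The `Buffer` fields, the idle threshold, and the fallback ordering (same-size subgroup first, then split a larger buffer, then coalesce smaller ones) are illustrative assumptions, not the method of any particular system:

```python
class Buffer:
    def __init__(self, size, last_access):
        self.size = size              # buffer size, in bytes
        self.last_access = last_access  # timestamp of most recent access

def choose_victims(groups, want_size, now, idle_threshold):
    """Pick in-use buffer(s) to steal for a request of want_size bytes.

    groups maps buffer size -> list of in-use Buffer objects.
    """
    def lru(size):
        pool = groups.get(size, [])
        return min(pool, key=lambda b: b.last_access) if pool else None

    # Prefer the same-size subgroup when its LRU buffer looks idle enough.
    same = lru(want_size)
    if same is not None and now - same.last_access >= idle_threshold:
        return [same]
    # Otherwise steal one larger buffer (to be split) ...
    for size in sorted(s for s in groups if s > want_size):
        big = lru(size)
        if big is not None:
            return [big]
    # ... or steal enough smaller buffers (to be coalesced), oldest first.
    victims, covered = [], 0
    for size in sorted((s for s in groups if s < want_size), reverse=True):
        for buf in sorted(groups[size], key=lambda b: b.last_access):
            victims.append(buf)
            covered += size
            if covered >= want_size:
                return victims
    return [same] if same is not None else victims
```

The idle threshold stands in for the missing information noted above: without some measure of how recently (or how often) a buffer is actually used, the policy cannot tell an infrequently used buffer from an abandoned one.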
Therefore, a need exists to overcome the shortcomings of the prior art discussed above and to provide a system and method for optimizing steal events and the allocation of buffer memories of various sizes.