1. Field
Systems and methods consistent with exemplary embodiments relate to page allocation, and more particularly, to a method and a system for dynamically changing a page allocator, which can effectively manage a page pool by forking or merging page allocators in consideration of a system's state.
2. Description of the Related Art
In recent years, a multi-core hardware environment in which a plurality of processors (or CPU cores) operate in one system has become more widely used. This trend became more noticeable after dual-core products entered the market, and the industry is now moving beyond the multi-core environment into a many-core processor age.
Against this background, the chip densities of processors have been increasing and multi-core architectures have been developed, so that on-chip processing resources continue to grow.
A recent multi-core chip has more than ten processors, and a single chip is expected to contain several hundred processors in the near future.
As the number of processors included in one system increases, the scalability of the operating system becomes increasingly important. That is, the operation of the main components of the operating system should be controlled so as to effectively utilize the plurality of processors, and the page allocation scheme of the memory should be reconsidered in this light.
Page allocation schemes according to the related art include a global page allocation scheme and a local page allocation scheme, and in both schemes the page allocation scheme is statically determined.
FIG. 1 schematically illustrates a global page allocation scheme, and FIG. 2 schematically illustrates a local page allocation scheme.
Referring to FIG. 1, a global page allocator globally manages pages through a single pool including a plurality of pages. Requests for allocating pages and requests for deallocating pages are processed concurrently through lock segmentation of the one pool.
Because such a global page allocator manages all pages through one page allocator, memory fragmentation is easily minimized, but there is a disadvantage in that scalability with respect to page allocation and deallocation requests deteriorates. Although the buddy allocator of a Linux system increases the concurrency of access to the page pool management data through lock segmentation in order to address this problem, the scalability improvement achievable in this manner is limited.
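The lock-segmented global pool described above might be sketched as follows. This is a minimal illustration under assumptions, not the actual Linux buddy allocator: the segment count, the structure layout, and the function names are all hypothetical, and a real allocator would track page frames rather than simple counters.

```c
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical sketch: one global page pool whose free pages are
 * split across SEGMENTS independently locked sub-lists ("lock
 * segmentation"), so concurrent allocation and deallocation
 * requests touching different segments do not contend on one lock. */
#define TOTAL_PAGES 64
#define SEGMENTS 4

struct segment {
    pthread_mutex_t lock;
    int free_pages;            /* free pages tracked by this segment */
};

struct global_pool {
    struct segment seg[SEGMENTS];
};

void pool_init(struct global_pool *p) {
    for (int i = 0; i < SEGMENTS; i++) {
        pthread_mutex_init(&p->seg[i].lock, NULL);
        p->seg[i].free_pages = TOTAL_PAGES / SEGMENTS;
    }
}

/* Allocate one page: probe the segments in turn, locking only the
 * segment being examined, never the whole pool.  Because every
 * segment belongs to the same global pool, allocation fails only
 * when the pool as a whole is exhausted. */
bool pool_alloc(struct global_pool *p, int hint) {
    for (int i = 0; i < SEGMENTS; i++) {
        struct segment *s = &p->seg[(hint + i) % SEGMENTS];
        pthread_mutex_lock(&s->lock);
        if (s->free_pages > 0) {
            s->free_pages--;
            pthread_mutex_unlock(&s->lock);
            return true;
        }
        pthread_mutex_unlock(&s->lock);
    }
    return false;              /* pool genuinely exhausted */
}

void pool_free(struct global_pool *p, int hint) {
    struct segment *s = &p->seg[hint % SEGMENTS];
    pthread_mutex_lock(&s->lock);
    s->free_pages++;
    pthread_mutex_unlock(&s->lock);
}
```

Note that even with segmented locks, every request still walks shared state of the one pool, which is the scalability limit the passage above describes.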
Referring to FIG. 2, a local page allocation scheme divides the pages into a plurality of pools and manages each pool through a separate page allocator. Since each page allocator operates independently, the page allocators can process requests for allocating pages and requests for deallocating pages simultaneously. Since each local page allocator manages a separate page pool, perfect concurrency in processing page allocation and deallocation requests is ensured.
However, because the local page allocators manage the pages in divided form, when the memory load is unbalanced, a particular allocator may run out of pages even though sufficient free pages remain from the viewpoint of the whole page pool. Further, when the allocator that allocated a page differs from the allocator that is requested to deallocate it, there is a disadvantage in that page fragmentation may arise among the allocators. The allocators may communicate with one another in an attempt to avoid this phenomenon, but there is a disadvantage in that such communication increases the overhead of page pool management.
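The imbalance disadvantage of the local scheme described above can be illustrated with a short sketch. The structures and function names here are assumptions for illustration only; each core owns a private pool, so no shared lock is needed, but a request can fail on one core while free pages remain elsewhere.

```c
#include <stdbool.h>

/* Hypothetical sketch: local page allocation.  Each core manages a
 * private pool, so allocation and deallocation require no shared
 * lock, but a core's request fails as soon as its own pool is
 * empty, even when other pools still hold free pages. */
#define NCORES 4
#define PAGES_PER_POOL 16

struct local_allocator {
    int free_pages[NCORES];    /* one private pool per core */
};

void la_init(struct local_allocator *a) {
    for (int i = 0; i < NCORES; i++)
        a->free_pages[i] = PAGES_PER_POOL;
}

/* Allocate strictly from the caller's own pool. */
bool la_alloc(struct local_allocator *a, int core) {
    if (a->free_pages[core] > 0) {
        a->free_pages[core]--;
        return true;
    }
    return false;              /* local pool empty: request fails */
}

void la_free(struct local_allocator *a, int core) {
    a->free_pages[core]++;
}

/* Free pages remaining across the whole system. */
int la_total_free(const struct local_allocator *a) {
    int total = 0;
    for (int i = 0; i < NCORES; i++)
        total += a->free_pages[i];
    return total;
}
```

In this sketch, exhausting one core's pool makes its allocations fail even though `la_total_free` still reports many free pages system-wide, which is exactly the unbalanced-load disadvantage noted above.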
Thus, there is a trade-off between the advantages and disadvantages of the global allocation scheme and the local allocation scheme.