An operating system (hereinafter referred to as an “OS”) has a function of reserving and freeing a memory area of a requested size in response to memory-reserving requests and memory-freeing requests transmitted from various application programs.
For example, PTL 1 discloses an information processing device that prevents performance degradation of an application program (hereinafter simply referred to as an “application”) by quickly responding to memory-reserving requests and memory-freeing requests issued from the application. This information processing device reserves a memory area of the size necessary for the entire application upon activation of the application, and assigns a portion of the area reserved upon activation to the application in response to a memory-reserving request issued during execution of the application. In this way, the information processing device reduces the execution time of memory reserving processing and memory freeing processing by reducing the transmission of memory-reserving requests and memory-freeing requests from the application to the virtual memory.
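The scheme described for PTL 1 corresponds to a pre-reserved memory pool: one OS-level reservation at activation, with later requests served from that area. The following is a minimal sketch of such a pool in C; the names (`Pool`, `pool_init`, `pool_alloc`) and the bump-pointer strategy are illustrative assumptions, not the actual implementation in PTL 1.

```c
#include <stddef.h>
#include <stdlib.h>

/* Illustrative pool: one large area is reserved up front (e.g., at
 * application activation); later memory-reserving requests are served
 * from this area without issuing further requests to the OS. */
typedef struct {
    char  *base;   /* area reserved at activation */
    size_t size;   /* total size needed by the entire application */
    size_t used;   /* bump pointer into the reserved area */
} Pool;

/* Single OS-level reservation performed at activation. */
int pool_init(Pool *p, size_t total) {
    p->base = malloc(total);
    p->size = total;
    p->used = 0;
    return p->base != NULL;
}

/* Assign a portion of the pre-reserved area; no OS request is issued. */
void *pool_alloc(Pool *p, size_t n) {
    if (p->used + n > p->size) return NULL;  /* pool exhausted */
    void *out = p->base + p->used;
    p->used += n;
    return out;
}

/* Single OS-level free when the application terminates. */
void pool_destroy(Pool *p) {
    free(p->base);
    p->base = NULL;
}
```

Because `pool_alloc` only advances a pointer, each in-application request avoids the cost of a round trip to the OS, which is the performance effect the prior art aims at.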
Further, PTL 2 discloses a shared resource management system that makes resource management efficient and reduces the processing costs of returning resources. This shared resource management system manages resources in units of groups, each consisting of a plurality of processes, and provisionally returns resources that have been used. Then, if another group issues an allocation request, the shared resource management system actually returns the provisionally returned resources and assigns them to the other group. If no other group issues an allocation request, the shared resource management system performs return processing when all the processes included in the group have completed. In this way, the shared resource management system makes resource management efficient and reduces the processing costs of returning resources through provisional returning, in which resources are returned both by each process and in a collective manner.
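The provisional-return policy of PTL 2 can be sketched as follows: a finishing process only marks a resource as provisionally returned, and the actual return is deferred until another group requests an allocation. This is a simplified C sketch under that assumption; the names (`Resource`, `provisional_return`, `allocate_to_group`) are hypothetical.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical model of one shared resource managed per group. */
typedef struct {
    void *mem;                     /* the managed resource itself */
    int   owner_group;             /* group currently holding it */
    bool  provisionally_returned;  /* marked, but not actually returned */
} Resource;

/* A process finishing with the resource only marks it: this is the
 * cheap "provisional return", with no actual return processing yet. */
void provisional_return(Resource *r) {
    r->provisionally_returned = true;
}

/* Another group issues an allocation request: only now is the
 * provisionally returned resource actually returned and reassigned. */
bool allocate_to_group(Resource *r, int group) {
    if (r->owner_group == group) return true;      /* already held */
    if (!r->provisionally_returned) return false;  /* still in use */
    r->owner_group = group;                        /* actual return + reassignment */
    r->provisionally_returned = false;
    return true;
}
```

Deferring the actual return means that if no other group ever asks, the per-process return costs collapse into one collective return when the whole group completes.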
PTL 3 discloses processing relating to reserving and freeing a memory area.
To address the increasing loads imposed by executing applications, servers equipped with a high-performance many-core accelerator in addition to a host processor have increasingly been used in recent years. The many-core accelerator includes a plurality of cores that perform operation processing, as well as limited memory areas used by the cores.
When a server equipped with such a many-core accelerator uses the many-core accelerator, the following processing is performed: processing of reserving the memory equipped in the many-core accelerator, processing of transferring data from the memory used by the host processor to the memory equipped in the many-core accelerator, and operation processing in the many-core accelerator. Further, processing of transferring data from the memory equipped in the many-core accelerator to the memory used by the host processor and processing of freeing the memory equipped in the many-core accelerator are also performed.
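The series of processing above can be sketched as a single offload routine. In this C sketch the accelerator memory and the transfers are simulated with `malloc` and `memcpy` so the example is self-contained; on a real accelerator the same steps would map to its runtime API (for example, `cudaMalloc`, `cudaMemcpy`, a kernel launch, and `cudaFree` in CUDA). The function name and the sum operation are illustrative assumptions.

```c
#include <stdlib.h>
#include <string.h>

/* Sketch of the offload sequence: reserve accelerator memory, transfer
 * in, compute, transfer back, free. Device operations are simulated
 * with host calls here for the sake of a runnable example. */
static long offload_sum(const int *host_in, size_t n) {
    /* 1. Reserve the memory equipped in the accelerator. */
    int *dev = malloc(n * sizeof *dev);     /* stands in for a device alloc */
    if (dev == NULL) return 0;
    /* 2. Transfer data from host memory to accelerator memory. */
    memcpy(dev, host_in, n * sizeof *dev);  /* host -> device */
    /* 3. Operation processing in the accelerator (a sum, as an example). */
    long result = 0;
    for (size_t i = 0; i < n; i++) result += dev[i];
    /* 4. Transfer the result from accelerator memory back to host memory
     *    (trivial here, since the result is a single scalar). */
    /* 5. Free the memory equipped in the accelerator. */
    free(dev);
    return result;
}
```

Note that steps 1 and 5 are exactly the memory reserving and freeing processing discussed above, performed once per offloaded operation.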
A method as described above, in which a server uses a many-core accelerator, is called an “offload method.” Further, the function of an application for performing this series of processing is called an “offload unit.” As described above, memory reserving processing and memory freeing processing are performed by an OS in response to a memory-reserving request and a memory-freeing request from an application.
The data transfer processing between the memory used by the host processor and the memory equipped in the many-core accelerator is performed using Direct Memory Access (DMA).