A supercomputer required to carry out complicated scientific and technical calculations at high speed employs a parallel computing system configuration in which a number of calculating nodes are coupled, via a high-speed communication network, with a storage containing the data to be used for computing, in addition to providing each processor with higher computing performance. In such a parallel computing system, hereinafter referred to as a parallel system, improving the processing efficiency of the entire system requires efficiently distributing the data stored in the storage to the calculating nodes as necessary.
In recent years, in view of the mounting difficulty in enhancing the performance of a processor itself due to packaging density and the like, and in enlarging the scale of a parallel system due to restrictions on power consumption, installation area, and the like, a so-called heterogeneous configuration has been proposed to improve the processing efficiency of the parallel system, in which processors of different types are used according to the processing details of the program designated by a job submitted to the system. In the heterogeneous configuration, an ordinary processor executing general computing processing is combined with, for example, a GPGPU (General-Purpose computing on Graphics Processing Units) arrangement that sends special computing processing in the program to a GPU (Graphics Processing Unit) originally dedicated to graphics processing. Although adoption of the heterogeneous configuration is expected to improve the speed of computing processing at each calculating node of the parallel system, the aforementioned efficient distribution of data remains important.
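The dispatch underlying such a heterogeneous configuration can be illustrated schematically. The job fields and the routine below are hypothetical, a minimal sketch of routing a job to the processor type suited to its processing details rather than any particular system's implementation.

```python
def run_job(job: dict) -> str:
    """Dispatch a job's computation to a processor type according to
    its processing details: a GPU (GPGPU use) for data-parallel
    kernels, an ordinary general-purpose processor otherwise.
    The "kind" field is an assumed job attribute for illustration."""
    if job.get("kind") == "data_parallel":
        return f"offloading {job['name']} to GPU"
    return f"running {job['name']} on CPU"

print(run_job({"name": "matmul", "kind": "data_parallel"}))
# -> offloading matmul to GPU
```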
In this respect, techniques such as those disclosed in Patent Literatures 1 to 3, for example, have conventionally been proposed. Patent Literature 1 discloses in its abstract, for the purpose of providing a hierarchical storage system achieving both high performance and low power consumption, a storage system 2 coupled with a computer management server 18 and including a first hierarchical storage apparatus 11 providing a first volume 51 for storing files, a second hierarchical storage apparatus 12 providing a second volume 52 for storing files, and a storage management server 19. The server 18 holds information on jobs executed sequentially on a computer 14 and information on job queues under execution or waiting to be executed. The server 19 collects and analyzes this information, specifies the volume 52 that a job accesses, calculates from the job queue information the mean waiting time before the start of execution of each job, activates the disk apparatus constituting the volume 52, and calculates the threshold time required for copying the volume 52 to the volume 51; when the mean waiting time at the time of job submission is shorter than the threshold time, the execution of the job is delayed by the threshold time.
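The copy-versus-delay decision of Patent Literature 1 can be sketched as follows. The function name and the simple bandwidth-based model of the copy threshold are assumptions for illustration, not the literature's actual implementation.

```python
def schedule_job(mean_wait_s: float, volume_size_bytes: int,
                 copy_bandwidth_bps: float) -> float:
    """Return the delay (in seconds) to apply to a submitted job.

    threshold_s approximates the time required to copy the accessed
    second volume (52) to the faster first volume (51); here it is
    modeled simply as size divided by copy bandwidth (an assumption).
    """
    threshold_s = volume_size_bytes / copy_bandwidth_bps
    if mean_wait_s < threshold_s:
        # The expected queue wait is too short for the copy to finish:
        # delay the job by the threshold time so the copy completes first.
        return threshold_s
    # The copy finishes within the normal queue wait; no extra delay.
    return 0.0

# Example: a 100 GB volume over a 1 GB/s link gives a 100 s threshold.
print(schedule_job(60.0, 100 * 10**9, 10**9))   # -> 100.0 (delayed)
print(schedule_job(200.0, 100 * 10**9, 10**9))  # -> 0.0 (no delay)
```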
Patent Literature 2 discloses in its abstract, for the purposes of facilitating minimum-guarantee-type performance assurance of the storage device resources consumed by each tenant and of predicting the resource consumption required based on the input/output characteristics of an application, a storage resource control system 100 that controls the resource availability of a storage 211 by having a bandwidth controller 221 control the bandwidth consumption of a network 212. The system comprises a resource predicting part 120 that predicts the resource consumption required of the storage 211 using a linear model 112, consisting of an input/output processing volume model and a bandwidth consumption model, together with I/O characteristics 121: it predicts the input/output processing volume based on the input/output processing volume model and predicts the bandwidth consumption of the corresponding network 212 based on the bandwidth consumption model. The system further comprises a bandwidth determining part 130 that determines bandwidth control information 132 from the predicted bandwidth consumption 122 based on a setting policy 131.
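The two-stage prediction and policy-based determination of Patent Literature 2 can be sketched schematically. The model coefficients and the clamp-style policy below are hypothetical stand-ins for the fitted linear model 112 and the setting policy 131; the actual models and policy contents are not specified here.

```python
def predict_bandwidth(io_rate_iops: float, io_size_bytes: float,
                      coeff: float, intercept: float) -> float:
    """Two-stage linear prediction: I/O characteristics (121) are first
    turned into an input/output processing volume, which a linear
    bandwidth consumption model then maps to predicted network
    bandwidth (bytes/s). coeff and intercept are assumed fitted
    parameters of the linear model."""
    processing_volume = io_rate_iops * io_size_bytes  # I/O volume model
    return coeff * processing_volume + intercept      # bandwidth model

def bandwidth_limit(predicted_bps: float, policy_min_bps: float,
                    policy_max_bps: float) -> float:
    """Determine the bandwidth control value from the predicted
    consumption, clamped to the setting policy's bounds (a simple
    assumed form of policy)."""
    return max(policy_min_bps, min(predicted_bps, policy_max_bps))

# Example: 1000 IOPS of 4 KiB I/O under an identity bandwidth model.
demand = predict_bandwidth(1000.0, 4096.0, 1.0, 0.0)
print(bandwidth_limit(demand, 1_000_000.0, 2_000_000.0))
```

The determined value would then be handed to a bandwidth controller such as 221 to throttle the network path to the storage.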
Patent Literature 3 discloses, in claim 1, an accelerator management apparatus comprising: a first storage that stores an accelerator identifier, identifying an accelerator used by an application, in correlation with an application identifier identifying the application; a second storage that stores the accelerator identifier of the accelerator installed in each slot in correlation with a slot identifier identifying each of the slots of an extension box containing multiple accelerators; a first identifying part 26 that, when a request for executing an application is received from a host, identifies the accelerator identifier corresponding to the application from the first storage; a second identifying part that identifies, from the second storage, the slot identifier corresponding to the accelerator identifier identified by the first identifying part; and an allotting control part that allots to the host the slot identified by the slot identifier identified by the second identifying part.
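The two-table resolution performed by the identifying parts of Patent Literature 3 can be sketched as a pair of lookups. The table contents and identifier strings below are hypothetical examples, not values from the literature.

```python
# First storage: application identifier -> accelerator identifier.
app_to_accel = {"app-001": "accel-A", "app-002": "accel-B"}

# Second storage: accelerator identifier -> slot identifier
# of the extension box holding the accelerators.
accel_to_slot = {"accel-A": "slot-3", "accel-B": "slot-7"}

def allot_slot(app_id: str) -> str:
    """Resolve which slot to allot to the requesting host:
    the first lookup corresponds to the first identifying part (26),
    the second to the second identifying part."""
    accel_id = app_to_accel[app_id]   # first identifying part
    return accel_to_slot[accel_id]    # second identifying part

print(allot_slot("app-001"))  # -> slot-3
```

The allotting control part would then attach the resolved slot (and hence its accelerator) to the host that issued the execution request.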