A data center is a facility that provides Internet connection lines together with maintenance and operating services, and demand for such data centers is increasing with the widespread use of the Internet.
In a data center, servers, storage units, Layer 2 switches (L2 switches), and the like are housed in a number of racks, and the L2 switches aggregate traffic from the servers and storage units and transmit it to a host.
Meanwhile, with the development of cloud computing, the number of virtual machines (VMs) within a physical server is increasing, and communication control in a network within a data center is accordingly becoming more complicated. Under these circumstances, it is desirable to construct a data center that secures data traffic and implements high-quality communication even as the number of VMs increases.
As a known bandwidth setting technique, a technique for allocating surplus bandwidth in proportion to requested bandwidths has been proposed in, for example, Japanese Laid-open Patent Publication No. 2003-069627.
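Proportional surplus allocation of this kind can be illustrated with a minimal sketch. The function name, the use of a single shared link capacity, and the VM identifiers below are assumptions for illustration, not details taken from the cited publication:

```python
def allocate_with_surplus(capacity_mbps, requests_mbps):
    """Give each VM its requested bandwidth, then split any surplus
    link capacity among the VMs in proportion to their requests."""
    requested_total = sum(requests_mbps.values())
    surplus = max(capacity_mbps - requested_total, 0)
    return {
        vm: req + surplus * req / requested_total
        for vm, req in requests_mbps.items()
    }

# Example: a 1000 Mb/s link shared by three VMs requesting
# 200, 300, and 100 Mb/s (600 Mb/s in total, so 400 Mb/s is surplus).
alloc = allocate_with_surplus(1000, {"vm1": 200, "vm2": 300, "vm3": 100})
```

In this example the 400 Mb/s surplus is divided in the ratio 200:300:100, so vm2 (which requested the most) receives the largest share of the surplus.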
In an operation management system within a data center, bandwidth setting control for allocating bandwidths to VMs is performed. As the number of VMs increases, the bandwidth setting for allocating a bandwidth to each VM becomes more complicated, thereby increasing the load imposed on the operation management system.
An increase in this load increases delay time, which makes it difficult to perform bandwidth setting speedily, thereby decreasing the communication quality in the network within the data center.