Along with improvements in open server performance and functionality, server virtualization software (VMM: Virtual Machine Manager) has come into wide use as a method for efficiently utilizing the CPU cores mounted in a server. The VMM creates a plurality of virtual machines by virtualizing the computer resources mounted in one physical server, such as the CPU, memory, and I/O devices, and operates an OS and applications on each of those virtual machines.
Virtualization support functions are increasingly being incorporated into the CPU, memory, and I/O devices in order to alleviate the drop in performance (overhead) that accompanies VMM processing. For executing VMM functions, Intel Corporation and AMD Corporation respectively provide VT-x (Virtualization Technology for x86) and SVM (Secure Virtual Machine); to allow an I/O device to directly operate the memory, they respectively provide VT-d (Virtualization Technology for devices) and IOMMU (Input/Output Memory Management Unit).
Among I/O devices, virtualization performance is becoming particularly important for NICs (Network Interface Cards), which drastically improve communication bandwidth. Multi-queue NICs, capable of assigning a plurality of queues (modules providing send and receive functions for network frames) to virtual machines, have therefore appeared. Technology known in the related art for implementing multi-queue NICs includes VMDq (Virtual Machine Device queue) by Intel Corporation and SR-IOV (Single Root I/O Virtualization), standardized in conformance with the PCI (Peripheral Component Interconnect) standards.
In VMDq and SR-IOV, software operates the I/O device by way of different interfaces. Besides queues, the NIC must also provide a PCI configuration register that holds basic I/O device information. In VMDq there is only one set of PCI configuration registers, and the NIC simply provides a plurality of queues. SR-IOV, on the other hand, provides plural sets each containing a queue and PCI configuration registers. Each of these sets contains the minimum functions needed to operate as an autonomous NIC and is called a VF (Virtual Function). In SR-IOV, the module controlling the entire I/O card, including the VFs, is called a PF (Physical Function).
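The structural contrast above can be sketched as a simplified data model. This is a hedged illustration only: the class names and register fields below are hypothetical and do not reflect any vendor's actual register layout.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PciConfigRegister:
    """Holds basic I/O device information (illustrative fields only)."""
    vendor_id: int
    device_id: int

@dataclass
class Queue:
    """A module providing send and receive functions for network frames."""
    qid: int

@dataclass
class VmdqNic:
    """VMDq: a single set of PCI configuration registers plus a plurality of queues."""
    config: PciConfigRegister
    queues: List[Queue]

@dataclass
class VirtualFunction:
    """SR-IOV VF: the minimum functions to act as an autonomous NIC --
    its own configuration registers together with its own queue."""
    config: PciConfigRegister
    queue: Queue

@dataclass
class PhysicalFunction:
    """SR-IOV PF: controls the entire I/O card, including all VFs."""
    config: PciConfigRegister
    vfs: List[VirtualFunction]

# One config-register set, four queues (VMDq)
vmdq = VmdqNic(PciConfigRegister(0x8086, 0x0001), [Queue(i) for i in range(4)])

# Four sets, each with its own config registers and queue (SR-IOV)
pf = PhysicalFunction(
    PciConfigRegister(0x8086, 0x0002),
    [VirtualFunction(PciConfigRegister(0x8086, 0x0002), Queue(i)) for i in range(4)],
)
```

The essential difference is thus where the configuration registers live: VMDq shares one set across all queues, while each SR-IOV VF carries its own set.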
Until the appearance of the multi-queue NIC, the VMM required a software process (software copy) in which it received all frames, investigated the destination of each frame, and then copied it to the memory of the appropriate virtual machine. However, as network bandwidth improved to 1 Gbps and 10 Gbps, this software copying became a bottleneck that prevented full utilization of the physical NIC's bandwidth. With a multi-queue NIC, on the other hand, each queue is independently assigned to a virtual machine, so software copying is not needed: frames are conveyed directly to the memory of the virtual machine.
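The two receive paths just described can be contrasted in a short sketch. The function and variable names here are illustrative assumptions, not taken from any actual VMM or driver; the "direct DMA" of the multi-queue path is merely modeled as an append.

```python
def software_copy_receive(frames, vm_memory_by_mac, stats):
    """Pre-multi-queue path: the VMM receives every frame, investigates
    its destination, and copies it into the appropriate virtual machine's
    memory -- the software copy that becomes a bottleneck at 1-10 Gbps."""
    for frame in frames:
        dest_vm = vm_memory_by_mac[frame["dst_mac"]]
        dest_vm.append(dict(frame))     # per-frame copy performed by the VMM
        stats["vmm_copies"] += 1        # CPU work charged to the VMM

def multi_queue_receive(frames, queue_by_mac):
    """Multi-queue path: each queue is independently assigned to one VM,
    so the NIC steers each frame straight into that VM's memory; the VMM
    performs no copy at all."""
    for frame in frames:
        queue_by_mac[frame["dst_mac"]].append(frame)  # modeled direct DMA

frames = [{"dst_mac": "vm-a", "data": "x"}, {"dst_mac": "vm-b", "data": "y"}]

stats = {"vmm_copies": 0}
legacy_mem = {"vm-a": [], "vm-b": []}
software_copy_receive(frames, legacy_mem, stats)   # 2 VMM copies

mq_mem = {"vm-a": [], "vm-b": []}
multi_queue_receive(frames, mq_mem)                # 0 VMM copies
```

The point of the model is the `stats["vmm_copies"]` counter: in the legacy path it grows with every frame, while the multi-queue path leaves the VMM entirely out of the per-frame data movement.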
Technology for utilizing this multi-queue NIC is disclosed, for example, in US Patent Application Publication No. 2009-0133016, which discloses an I/O management partition scheme for controlling I/O devices that include common functions conforming to SR-IOV.
Configurations utilizing a combination of two VMM having different characteristics are starting to be proposed in order to reduce the drop in virtual machine performance (overhead). If, for example, a VMM capable of creating a large number of virtual machines with minimal computer resources (hereafter called an Lv1 VMM) could be made to operate on a virtual machine created by a VMM highly resistant to hardware faults (hereafter called a Hypervisor), then a large number of virtual machines could be safely operated.
Technology to implement such a configuration is known, for example, from Japanese Unexamined Patent Application Publication No. 2009-3749, which discloses technology for “executing a user program comprised of a next generation OS containing virtual functions on a first virtual processor by selecting a guest status area for executing a user program on a second virtual processor and a host status area for executing the guest VMM according to the cause of the host VMM call-up, and rewriting the guest status area of the shadow VMCB for controlling the physical processor.”
The multi-queue NIC is technology provided for configurations utilizing just one VMM; however, there was a need to utilize the virtual machines on the Lv1 VMM (hereafter called sub-virtual machines) over a wide band even in configurations combining two VMM. Moreover, virtual machines are easily added and possess the flexibility to allow live migration, in which a virtual machine moves to another physical computer while still in operation; this flexibility was also needed in sub-virtual machines.
In a configuration combining two VMM, the Hypervisor is implemented on the physical computer and assigns resources installed on the physical computer, such as the NIC, to the virtual machines. The Lv1 VMM, operating on a virtual machine, further reassigns the resources installed in that virtual machine to the sub-virtual machines. The OS and applications operate on the sub-virtual machines.
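This two-level assignment can be sketched as follows. The class names `Hypervisor` and `Lv1Vmm` and the resource labels are hypothetical, used only to illustrate the delegation chain described above.

```python
class Hypervisor:
    """Runs on the physical computer; assigns physical resources
    (e.g. NIC queues) to the virtual machines it creates."""
    def __init__(self, resources):
        self.free = list(resources)

    def assign_to_vm(self, vm, count):
        given, self.free = self.free[:count], self.free[count:]
        vm["resources"].extend(given)
        return given

class Lv1Vmm:
    """Runs inside one virtual machine; reassigns that virtual machine's
    resources to the sub-virtual machines on which OS and apps operate."""
    def __init__(self, vm):
        self.pool = list(vm["resources"])

    def assign_to_sub_vm(self, sub_vm, count):
        given, self.pool = self.pool[:count], self.pool[count:]
        sub_vm["resources"].extend(given)
        return given

hv = Hypervisor(resources=["q0", "q1", "q2", "q3"])
vm = {"resources": []}
hv.assign_to_vm(vm, 2)            # Hypervisor -> virtual machine
lv1 = Lv1Vmm(vm)
sub_vm = {"resources": []}
lv1.assign_to_sub_vm(sub_vm, 1)   # Lv1 VMM -> sub-virtual machine
```

Note that the Lv1 VMM can only hand out what the Hypervisor first assigned to its virtual machine, which is precisely why bandwidth and flexibility available at the first level must be carried through to the sub-virtual machines.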
Technology utilizing a multi-queue NIC on a plurality of VMM is described, for example, in Japanese Unexamined Patent Application Publication No. 2009-301162, in which the VMM is described as a “virtual machine manager” and the virtual machine as an “LPAR.” As described in that publication, “In an environment where PCI-based SR-IOV devices are assigned by way of the IO switch to plural virtual machine managers on a physical computer; the PF is assigned to a first virtual machine manager, and a plurality of VF are assigned to LPAR for a desired virtual machine manager. When the second virtual machine manager has detected an event from the VF to the PF, it next communicates with the first virtual machine manager where the PF is assigned and executes the PF event on the first virtual machine manager. When the first virtual machine manager has detected an event from the PF to the VF, it next communicates with the second virtual machine manager where the applicable VF is assigned and executes the VF event in the LPAR on the second virtual machine manager.”
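The event-forwarding scheme quoted above can be sketched as message passing between the two virtual machine managers. This is a minimal model under stated assumptions: all class, method, and event names are illustrative and do not come from the cited publication.

```python
class VmManager:
    """A virtual machine manager that relays PF/VF events to its peer."""
    def __init__(self, name):
        self.name = name
        self.peer = None    # the other virtual machine manager
        self.log = []       # events executed locally

    def detect_vf_to_pf_event(self, event):
        # Second manager detects an event from a VF toward the PF:
        # it communicates with the first manager (where the PF is
        # assigned), which executes the PF event.
        self.peer.execute("PF", event)

    def detect_pf_to_vf_event(self, event, lpar):
        # First manager detects an event from the PF toward a VF:
        # it communicates with the second manager (where that VF is
        # assigned), which executes the VF event in the LPAR.
        self.peer.execute(f"VF@{lpar}", event)

    def execute(self, target, event):
        self.log.append((target, event))

mgr1 = VmManager("first")    # the PF is assigned here
mgr2 = VmManager("second")   # the VFs and LPARs are assigned here
mgr1.peer, mgr2.peer = mgr2, mgr1

mgr2.detect_vf_to_pf_event("reset-request")           # VF -> PF direction
mgr1.detect_pf_to_vf_event("link-down", lpar="LPAR1") # PF -> VF direction
```

Each manager thus executes only the events for functions it owns, with cross-boundary events always crossing through inter-manager communication first.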