In computer science, a virtual machine (VM) is a piece of software that, when executed on appropriate hardware, creates an environment that virtualizes an actual physical computer system. Each VM may function as a self-contained platform, running its own operating system (OS) and software applications (processes). Typically, a virtual machine monitor (VMM) manages the allocation and virtualization of computer resources and performs context switching, as may be necessary, to cycle between the various VMs.
A host machine (e.g., a computer or server) typically runs multiple VMs simultaneously, where each VM may be used by a local or remote client. The host machine allocates a certain amount of the host's resources to each of the VMs. Each VM is then able to use its allocated resources to execute software, including an operating system known as a guest operating system. The VMM virtualizes the underlying hardware of the host machine or emulates hardware devices.
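The allocation described above can be sketched in a few lines of Python. This is a toy model with hypothetical names (`Host`, `allocate`), not any actual hypervisor API; it only illustrates a host partitioning a fixed pool of memory and virtual CPUs among its guest VMs.

```python
class Host:
    """Toy model of a host machine partitioning its resources among VMs.

    Illustrative only; in practice the VMM/hypervisor performs this
    bookkeeping and enforces isolation between guests.
    """

    def __init__(self, total_mem_mb, total_vcpus):
        self.free_mem_mb = total_mem_mb
        self.free_vcpus = total_vcpus
        self.vms = {}

    def allocate(self, vm_name, mem_mb, vcpus):
        # Refuse allocations that exceed the host's remaining capacity.
        if mem_mb > self.free_mem_mb or vcpus > self.free_vcpus:
            raise RuntimeError(f"insufficient resources for {vm_name}")
        self.free_mem_mb -= mem_mb
        self.free_vcpus -= vcpus
        self.vms[vm_name] = {"mem_mb": mem_mb, "vcpus": vcpus}


host = Host(total_mem_mb=8192, total_vcpus=8)
host.allocate("guest-01", mem_mb=2048, vcpus=2)
host.allocate("guest-02", mem_mb=4096, vcpus=4)
print(host.free_mem_mb, host.free_vcpus)  # remaining capacity: 2048 2
```

Each guest then runs against only the slice of resources it was granted, which is what makes the later question of migrating a guest (and its in-flight work) to another host nontrivial.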
Oftentimes, a VM that is centrally hosted may require migration for a variety of reasons, including load balancing on the host server, maintenance of the host server, upgrades of software and/or hardware of the host server, and so on. Solutions for migrating VMs are currently offered. Yet, a problem with current implementations of VM migration is that they typically wait for input/output (I/O) requests associated with a migrating VM to be completed on its origin host machine before the VM's operations can be resumed on a destination host machine.
Under these current implementations, completion of the migration process for a VM may be unnecessarily delayed, and processing resources on the origin host machine may be inefficiently utilized, thereby resulting in performance degradation of the origin host machine.
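The delay described above can be made concrete with a short sketch. The following Python model (hypothetical names such as `migrate_vm` and `PendingIOQueue`; not any real hypervisor interface) shows a migration routine that blocks until every outstanding I/O request has drained on the origin host before the VM is allowed to resume on the destination, so the total migration time grows with the number of in-flight requests.

```python
import time


class PendingIOQueue:
    """Tracks I/O requests issued by a VM that have not yet completed."""

    def __init__(self, requests):
        self.requests = list(requests)  # outstanding request IDs

    def complete_next(self):
        # Simulate the origin host finishing one outstanding request.
        return self.requests.pop(0) if self.requests else None

    def empty(self):
        return not self.requests


def migrate_vm(vm_name, pending_io, io_service_time=0.0):
    """Blocking migration: the VM cannot resume on the destination host
    until every outstanding I/O request completes on the origin host."""
    drained = 0
    while not pending_io.empty():
        pending_io.complete_next()   # origin host keeps servicing I/O
        time.sleep(io_service_time)  # models per-request service latency
        drained += 1
    # Only now is the VM's state handed off and execution resumed.
    return {"vm": vm_name, "requests_drained": drained, "resumed": True}


result = migrate_vm("guest-01", PendingIOQueue(["io-1", "io-2", "io-3"]))
print(result)
```

In this model, the origin host remains occupied servicing the queue for the entire drain loop, which is precisely the inefficiency the passage above attributes to current implementations.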