In computing networks, a virtualized environment refers to the combination of hardware and software resources and their corresponding functionalities. Network virtualization typically involves platform virtualization combined with resource virtualization. Virtual resources are implemented such that the elements or systems that interface with them are unaware of the interface requirements of the underlying system components, whether hardware or software.
Virtualization may be used to help combine multiple physical resources into shared pools. Alternatively, one physical resource can appear as multiple virtual resources. In some computing networks, shared resources are provided on demand to computing systems and other devices connected to the network by deploying one or more virtual machines (VMs). A VM generally runs as a software application that provides a platform-independent programming environment, abstracting away details of the underlying hardware.
VMs may be executed on a hardware resource (i.e., a host machine) to service client requests. A hypervisor, typically implemented as a layer of software or firmware, enables a VM to run over the host machine. The hypervisor may execute in a privileged environment on the host machine and interact with the underlying hardware to enable sharing of resources among one or more VMs.
In some instances, it may be desirable to place or migrate a VM from one location (e.g., a first host) to another location (e.g., a second host) in the virtualized environment to satisfy or improve certain service requests or management goals (such as load balancing or global energy consumption). Management requests are eventually translated into plans of operations, where each plan typically specifies a sequential order of execution for its operations, anticipating their successful and safe execution.
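As a minimal sketch of the notion of a plan as an ordered sequence of operations, the following Python illustrates one possible representation; the `Operation` and `Plan` names, the resource labels, and the two-step relocation example are illustrative assumptions, not part of any specific system described here.

```python
from dataclasses import dataclass, field


@dataclass
class Operation:
    """A single step in a plan, e.g. migrating one VM between hosts."""
    name: str
    resources: set[str]  # resources (VMs, hosts) this step touches


@dataclass
class Plan:
    """An ordered sequence of operations, executed front to back."""
    operations: list[Operation] = field(default_factory=list)

    def execute(self, run_op) -> None:
        # Sequential order of execution: each operation runs only
        # after the previous one has completed.
        for op in self.operations:
            run_op(op)


# Hypothetical example: a two-step plan relocating "vm1" to "host2".
plan_a = Plan([
    Operation("reserve_capacity", {"host2"}),
    Operation("migrate_vm1", {"vm1", "host1", "host2"}),
])

log = []
plan_a.execute(lambda op: log.append(op.name))
print(log)  # the sequential order of the plan is preserved
```

The ordering matters: capacity must be reserved on the target host before the migration step can safely run, which is why a plan fixes a sequential order rather than a bag of operations.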
When multiple plans are requested for execution, the environmental resources that support the execution of each plan are to be considered in advance. Otherwise, changes in the execution environment may prevent the successful completion of a planned event or operation. For example, computing a plan (e.g., Plan A) for operations such as VM relocation or deployment may take some time. If another plan (e.g., Plan B) starts execution before Plan A and locks a resource that is to be allocated to Plan A, then Plan A may fail.
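The Plan A / Plan B failure mode above can be framed as a resource-overlap check performed before concurrent execution is allowed. The sketch below, in Python, is one simple way to model it; the function name and the specific resource sets are assumptions made for illustration.

```python
def plans_conflict(resources_a: set[str], resources_b: set[str]) -> bool:
    """Two plans conflict if they touch any resource in common:
    running them concurrently could let one lock a resource the
    other needs, causing the other plan to fail."""
    return bool(resources_a & resources_b)


# Hypothetical example: Plan A relocates vm1 from host1 to host2,
# while Plan B deploys a new VM that would lock capacity on host2.
plan_a_resources = {"vm1", "host1", "host2"}
plan_b_resources = {"host2", "host3"}

print(plans_conflict(plan_a_resources, plan_b_resources))  # True: both need host2
print(plans_conflict({"vm1", "host1"}, {"host3"}))         # False: disjoint, safe to run in parallel
```

Plans whose resource sets are disjoint can safely run concurrently; overlapping plans must be serialized or renegotiated, which is the consideration "in advance" that the text describes.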
While sequential plan scheduling is safe (i.e., it avoids deadlock situations, etc.), sequential execution generally results in substantial latencies in provisioning the requested services. These latencies contribute to poor user experience and sub-par system performance. In addition, the longer a plan takes to execute, the greater the chance that its goal will lose relevance, even if the plan can still be executed successfully. For example, a plan to better load balance a network may still be executable but no longer relevant if many deployments have occurred in the interim.