Frequently, computers are dedicated to individuals or to specific applications. For example, an individual owns or is assigned his or her own personal computer (PC). Each time a business hires an employee whose job entails access to a computer, a new PC is purchased and installed for that new hire. In other cases, a PC or server may be dedicated to a specific task. For example, a corporation could have one server hosting the company's web site, another server handling email, and yet another server handling financial transactions. This one-to-one correlation is simple, straightforward, flexible, and readily upgradeable. However, one drawback to this set-up is that it is inefficient from a computer-resource perspective.
The inefficiency stems from the fact that most software applications do not fully utilize the processing potential of the computer upon which they are installed. The processing power of a computer is largely defined by its interconnected hardware components. However, when creating software, programmers do not know the specific hardware capabilities of the computers upon which their software will ultimately be installed. Consequently, programmers tend to be extremely conservative when creating software, in order to ensure that the software can run on the vast majority of conventional, contemporary PCs or servers. As a result, software applications do not push the envelope set by hardware constraints. Furthermore, some applications consume a great deal of processing power, while others are inherently less computing intensive. When a PC or server is running less computationally intensive applications, much of its hardware resources are underutilized. Given hundreds or thousands of computers networked in an enterprise, the cumulative amount of wasted computing resources adds up.
In an effort to take advantage of all these underutilized computing resources, there have been efforts to design “virtual” machines. The concept of virtualization broadly describes the separation of a resource (e.g., a computing resource) and/or a request for a service from the underlying physical delivery of that service. For example, with virtual memory, computer software gains access to more memory than is physically installed via the background swapping of data to disk storage. Similarly, virtualization techniques are applied to other IT infrastructure layers, such as networks, storage, laptop hardware, server hardware, operating systems, and/or applications.
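The virtual-memory example above can be illustrated with a minimal sketch. The class below simulates demand paging: a process addresses more pages than fit in "RAM," and pages are transparently swapped out to a "disk" in the background. All names here are hypothetical stand-ins for illustration, not any real operating-system API.

```python
# Minimal, illustrative sketch of virtual memory: more pages are
# addressable than fit in RAM; pages swap to "disk" transparently.
# All class and variable names are hypothetical.

class VirtualMemory:
    def __init__(self, ram_frames):
        self.ram_frames = ram_frames  # physical capacity, in pages
        self.ram = {}                 # resident pages: page -> data
        self.disk = {}                # swapped-out pages: page -> data
        self.lru = []                 # least-recently-used eviction order

    def _touch(self, page):
        # Mark a page as most recently used.
        if page in self.lru:
            self.lru.remove(page)
        self.lru.append(page)

    def _ensure_resident(self, page):
        # Bring a page into RAM, evicting the LRU page to disk if needed.
        if page in self.ram:
            return
        if len(self.ram) >= self.ram_frames:
            victim = self.lru.pop(0)
            self.disk[victim] = self.ram.pop(victim)  # swap out
        self.ram[page] = self.disk.pop(page, 0)       # swap in (or zero-fill)

    def write(self, page, value):
        self._ensure_resident(page)
        self.ram[page] = value
        self._touch(page)

    def read(self, page):
        self._ensure_resident(page)
        self._touch(page)
        return self.ram[page]


vm = VirtualMemory(ram_frames=2)
for p in range(4):         # four pages addressed, only two fit in RAM
    vm.write(p, p * 10)
assert vm.read(0) == 0     # page 0 was swapped out and restored transparently
assert vm.read(1) == 10
```

The calling code never checks whether a page is in RAM or on disk; that indirection is the separation of the resource from its physical delivery described above.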
Through virtualization, the virtual infrastructure provides a layer of abstraction between the computing, storage, and networking hardware and the applications running on it, enabling a more efficient utilization of computing resources. In general, before virtualization, a single computer is associated with a single operating system image. The machine's hardware and software are tightly coupled, and running multiple applications on the same machine can create conflicts. Moreover, the machine is often underutilized and inflexible, all of which leads to an inefficient use of computing resources. In contrast, with virtualization, the operating system and applications are no longer tightly coupled to a particular set of hardware. Advantageously, the virtualized infrastructure allows IT administrators to manage pooled resources across an enterprise, creating a more responsive and dynamic environment.
Basically, a virtual machine entails loading a piece of software onto a physical “host” computer so that more than one user can utilize the resources of that host computer. In other words, the virtualization software package is loaded onto one or more physical host computers so that the processing resources of the host computers can be shared amongst many different users. By sharing computing resources, virtual machines make more efficient use of existing computers. Moreover, each user accesses the host computer through his or her own virtual machine; from the user's viewpoint, it appears as if he or she were using a dedicated computer, so users can continue to interact with computers in the manner to which they have grown accustomed. Thus, rather than buying, installing, and maintaining new computers, companies can simply load virtual machine software to get more leverage out of their existing computers. Furthermore, virtual machines do not entail any special training because they run transparently to the user. In addition, virtual machines can run multiple instances of different operating systems concurrently on the same host or group of hosts.
Amongst the benefits that virtual machines provide, one that users may find particularly useful is the ability to replicate or clone a virtual machine. For example, a user who wants to run a backup application on the original virtual machine can run the backup application on the cloned virtual machine instead, leaving the operations running on the original virtual machine uninterrupted.
In one example, the original virtual machine runs on a first host computer having a first set of computing resources, and the cloned virtual machine runs on a second host computer having a second set of computing resources. Hence, an operation (e.g., a data mining operation) can be performed on the cloned virtual machine without draining computing resources from the original virtual machine. In other examples, a user can perform other types of operations, such as scanning for viruses, running simulations, testing new application programs, mining data, and/or monitoring certain functions, on the cloned virtual machine without draining computing resources from the original virtual machine.
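The clone-then-offload pattern described above can be sketched as follows. The `VirtualMachine` class and its methods are hypothetical stand-ins for whatever management interface a real hypervisor exposes; the point is only that the replica's state is independent, so heavy work on the clone does not disturb the original.

```python
# Illustrative sketch of offloading an expensive operation (a backup)
# onto a cloned virtual machine. The VirtualMachine class and its
# methods are hypothetical, not a real hypervisor API.

import copy

class VirtualMachine:
    def __init__(self, name, disk_state):
        self.name = name
        self.disk_state = disk_state  # stand-in for the VM's virtual disk
        self.running = True

    def clone(self, name):
        # Copy the VM's state into an independent replica; the original
        # keeps running while work is performed on the clone.
        return VirtualMachine(name, copy.deepcopy(self.disk_state))

    def run_backup(self):
        # Stand-in for an expensive operation: snapshot the virtual disk.
        return dict(self.disk_state)


original = VirtualMachine("web-server", {"orders.db": ["order-1", "order-2"]})
replica = original.clone("web-server-backup")

backup = replica.run_backup()                        # heavy work on the clone
original.disk_state["orders.db"].append("order-3")   # original keeps serving

assert original.running
assert backup["orders.db"] == ["order-1", "order-2"]  # clone is independent
```

Because the clone holds a deep copy of the original's state, new orders taken on the original after the clone point do not appear in the backup, and the backup workload consumes none of the original machine's resources.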
However, under traditional approaches, in order to clone an original virtual machine, the original virtual machine has to be shut down before a replica or clone can be created. This is inconvenient and inefficient because shutting down the original virtual machine interrupts the functions that it serves. In one example, the original virtual machine may be a server dedicated to taking orders from online customers. Periodically, the original virtual machine needs to run backup operations to guard against possible system crashes that could lead to loss of data. Because running backup operations on the original virtual machine is both time consuming and drains a significant amount of computing resources, running backup operations on a clone is often preferred. Unfortunately, under traditional methods, the original virtual machine (e.g., a server) has to be shut down before a clone can be created, which is undesirable because the shutdown may interrupt a customer's shopping experience and lead to lost sales for the corporation.