A server is a computer system that provides some type of service to one or more client computer systems. Clients typically access the service using a network connection: local clients over a local area network (LAN), remote clients over a wide area network (WAN).
A server image is a logical embodiment of a server that contains all of the data needed to boot and operate one or more services on a computer. A server image typically includes (but is not limited to) a kernel and operating system, device drivers associated with hardware-related components, application software and data, and configuration settings for the network and storage environments.
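The components enumerated above can be sketched as a simple manifest. The following Python sketch is purely illustrative; the field names and values are assumptions for exposition, not part of any actual image format:

```python
from dataclasses import dataclass, field

@dataclass
class ServerImage:
    """Illustrative manifest of the components a server image bundles."""
    kernel: str                                        # kernel identifier
    operating_system: str                              # OS distribution/version
    device_drivers: list = field(default_factory=list) # hardware-specific drivers
    applications: dict = field(default_factory=dict)   # application name -> data path
    network_config: dict = field(default_factory=dict) # IP address, gateway, DNS, ...
    storage_config: dict = field(default_factory=dict) # mount points, volumes, ...

# Hypothetical example instance (all values invented for illustration).
image = ServerImage(
    kernel="vmlinuz-5.15",
    operating_system="Ubuntu 22.04",
    device_drivers=["e1000e", "megaraid_sas"],
    network_config={"ip": "10.0.0.5", "gateway": "10.0.0.1"},
    storage_config={"/": "/dev/sda1"},
)
```

Note that the driver, network, and storage fields are exactly the parts that bind the image to one specific hardware environment.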
A server can run an image by having the image installed into its permanent memory or onto a storage device accessible to the server. Alternatively, it can dynamically access the image via a network connection.
Because a server image includes device drivers and several other hardware-related components that are specific to the computer hardware on which it runs, and because the image includes configuration settings for the network and storage environments surrounding the computer on which it runs, an image will not function properly when moved from one computer environment to another without being significantly reconfigured. The “migration” process moves an image from one computer to another, reconfiguring it as appropriate for the new computer hardware and environment.
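The reconfiguration performed during migration can be sketched as a transformation of the image manifest: hardware-specific components are replaced with those required by the destination, and environment settings are rewritten. This is a hypothetical sketch; the function name and target-profile keys are invented for illustration:

```python
def migrate(image: dict, target: dict) -> dict:
    """Hypothetical sketch: reconfigure an image for new hardware/environment.

    `image` holds the source manifest; `target` describes the destination
    host's drivers, network assignment, and storage layout (all keys assumed).
    """
    migrated = dict(image)  # leave the source image untouched
    # Swap in the device drivers required by the destination hardware.
    migrated["device_drivers"] = target["required_drivers"]
    # Rewrite network settings for the destination environment.
    migrated["network_config"] = {
        "ip": target["assigned_ip"],
        "gateway": target["gateway"],
    }
    # Remap storage paths to the destination's storage layout.
    migrated["storage_config"] = target["storage_map"]
    return migrated
```

Everything not rewritten here (kernel, applications, data) carries over unchanged, which is what makes the image portable once the hardware-bound parts are reconfigured.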
The embodiment of a single server running an image to provide one or more services is often called a “workload”. With current technology, there are typically three ways to run a workload, depending on whether the single server functions as: 1) a physical server, 2) a virtual server, or 3) a cloud computer. A physical server is a dedicated physical computer running a single workload such that the operating system has exclusive, direct access to the computer's hardware. A virtual server is a workload running concurrently on a virtualization host such that the virtualization host intercedes between the computer hardware and the operating system within the workload to manage access to the physical resources of the underlying computer. Common virtualization hosts include computers running a VMware™ or Xen™ hypervisor. A cloud computer is a workload running on a pool of physical and/or virtual resources that can be dynamically allocated on demand to a varying number of workloads. Clouds can be “private”, such that the physical resources are owned by the same entity that owns the workloads, or “public”, such that the physical resources are owned by a third party and made available for use by the workload owner, typically for a fee.
Physical, virtual, and cloud servers provide different tradeoffs between total cost of ownership (TCO) and performance. Physical servers generally provide the best performance but generally have the highest TCO. Virtual servers reduce TCO by running multiple workloads on a single physical computer, but generally provide lower performance because they cannot provide a single workload with access to all the resources of that computer. The use of cloud servers can greatly reduce the capital cost component of TCO when dynamically scaling a service to match its current load. This is particularly effective when using public clouds, where the capital costs are borne by a third party.
The optimal placement of a workload, whether on a physical, virtual or cloud server, might change over time for many reasons such as the life cycle (development, test, production, etc.) of the service, the number of clients currently accessing the service, or the availability of more efficient physical resources. The TCO of a workload would be greatly reduced if there were a way to rapidly migrate it from one server to another, freely moving between physical, virtual, and cloud servers so that it can always be placed on the most cost-effective resource that meets its current needs.
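The placement decision described above can be sketched as a simple selection: among the placement types that meet a workload's current performance requirement, choose the one with the lowest cost. The scores below are invented purely to illustrate the tradeoff; they are not measurements:

```python
# Illustrative (invented) relative scores reflecting the tradeoffs described
# above: physical = best performance / highest TCO, cloud = lowest TCO.
PLACEMENTS = {
    "physical": {"performance": 1.0, "relative_tco": 1.0},
    "virtual":  {"performance": 0.8, "relative_tco": 0.6},
    "cloud":    {"performance": 0.7, "relative_tco": 0.4},
}

def best_placement(min_performance: float) -> str:
    """Return the cheapest placement type meeting the performance floor."""
    candidates = [
        (profile["relative_tco"], name)
        for name, profile in PLACEMENTS.items()
        if profile["performance"] >= min_performance
    ]
    return min(candidates)[1]  # lowest relative TCO among qualifying options

print(best_placement(0.75))  # -> virtual (cheapest option meeting the floor)
```

As the workload's requirements change over its life cycle, re-running the selection yields a different answer, which is precisely why rapid migration between placement types would reduce TCO.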
Conventionally, the process of migrating a workload from one server environment to another is largely a manual process that is time consuming, error prone, and very expensive. The automated migration tools that exist today are limited in capability. Tools provided by virtualization vendors such as VMWARE™ and CITRIX™ typically provide migration only into their specific hypervisor environments. More general-purpose tools such as Symantec's Ghost™ and PlateSpin's Migrate™ usually do not support cloud servers and cannot operate outside a corporate LAN environment.
Therefore, there is a long-felt but unresolved need for a system and/or method that provides the ability to freely migrate a workload between any types of environments, e.g., between physical, virtual, and cloud environments.