In view of the field of this invention, it is useful at this point to briefly review the process of server virtualization and the problem of distributing source servers among target servers.
On-demand computing is an enterprise-level computing model that allocates computing resources to an organization and its individual users on an as-needed basis. This process enables the enterprise to efficiently meet fluctuating computing demands. For example, if a group of users is working with applications that demand a lot of bandwidth, the on-demand computing model can allocate additional bandwidth specifically to this group and divert bandwidth away from users who do not need it at that time. One of the main tools used in implementing the on-demand computing model is server virtualization, as will be described below.
Although modern operating systems are inherently multitasking, there is always some interaction between the applications running under any given operating system, since the operating system must allocate resources between them. As a result, a faulty or overloaded application can significantly degrade or even disable other applications running under the same operating system. An ideal solution to this problem would be to dedicate an individual server to each application, since this would ensure minimal interaction between the applications. Furthermore, this arrangement would allow managers to run multiple operating systems on a given network, each of the operating systems being configured to provide optimal performance for a different task such as development, deployment or control. Unfortunately, in most cases this solution is simply too expensive to be practically realizable.
One means of overcoming this problem is to virtualize a plurality of servers (known as source servers) running different operating systems on a smaller number of target servers. The virtualized servers can then be easily configured and reconfigured in accordance with user demands. Since different target servers may have different available resources and each source server may have different resource requirements, the manner in which the source servers are distributed amongst the target servers effectively determines the number of target servers required to service the needs of a network.
However, any investigation of the distribution of source servers among target servers must be based on a detailed study of the network and must consider a large number of parameters. To date, it has been necessary to perform manual computations to determine an optimal distribution of source servers within target servers. However, since the computational time of these types of optimization problems typically increases exponentially with the number of parameters considered, the large number of parameters typically considered during server virtualization makes the manual computational approach extremely tedious and time consuming. As a result, such manual computations are typically only performed for a comparatively small number of source servers (e.g. fewer than 25 source servers).
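To illustrate why the computation scales so poorly, consider a naive exhaustive search over every assignment of n source servers to at most n target servers: there are n**n candidate assignments to examine. The following Python sketch makes this concrete for a toy case considering only a single CPU parameter; the demand figures and capacity are invented for illustration and do not represent any actual network:

```python
from itertools import product

def min_targets_bruteforce(cpu_demands, cpu_capacity):
    """Exhaustively try every assignment of source servers to target
    servers.  With n source servers and up to n target servers there
    are n**n candidate assignments, which is why the search time grows
    exponentially with the size of the problem."""
    n = len(cpu_demands)
    best = n  # worst case: one target server per source server
    for assignment in product(range(n), repeat=n):  # n**n candidates
        load = [0] * n
        for demand, target in zip(cpu_demands, assignment):
            load[target] += demand
        if all(l <= cpu_capacity for l in load):  # feasible assignment
            best = min(best, len(set(assignment)))
    return best

# Four source servers (illustrative CPU demands), targets with 100 CPU units:
servers_needed = min_targets_bruteforce([60, 40, 30, 30], 100)
print(servers_needed)  # 2  (e.g. 60+40 on one target, 30+30 on another)
```

Even this toy instance examines 4**4 = 256 assignments; at 25 source servers the count exceeds 10**34, which is why manual or brute-force computation is impractical beyond small networks.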
Furthermore, if a single source server, target server or virtualization software parameter is changed, it is necessary to repeat the entire process of manual optimization for the new parameter set. Thus, it is not easy to perform experiments to investigate the effects of individual parameters on the distribution of source servers.
One way of simplifying the optimization problem is to reduce the number of parameters considered therein. For example, it is possible to focus only on the CPU and memory parameters of the source servers and target servers. However, this simplification leads to a less accurate or unreliable solution insofar as the number of target servers determined with the reduced parameter set is typically smaller than that determined with a more complete parameter set. For instance, if the only parameter considered were the CPU speed of a source server, x target servers might be sufficient to accommodate a group of source servers. However, if the memory requirements of the source servers were also considered, it might be necessary to use more than x target servers to accommodate the source servers (depending on the memory resources of the target servers). In any case, it would not be possible to accommodate the source servers in fewer than x target servers.
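The effect described above can be sketched with a simple greedy (first-fit) packing heuristic, written here in Python. The CPU and memory figures are invented purely for illustration: packing by CPU alone fits the source servers into two target servers, but adding the memory constraint forces a third:

```python
def first_fit(demands, capacity):
    """Greedy first-fit packing: place each source server on the first
    target server with enough remaining capacity in every considered
    dimension; open a new target server if none fits."""
    targets = []  # remaining capacity of each opened target server
    for demand in demands:
        for remaining in targets:
            if all(remaining[k] >= demand[k] for k in demand):
                for k in demand:
                    remaining[k] -= demand[k]
                break
        else:  # no existing target server can host this source server
            targets.append({k: capacity[k] - demand[k] for k in demand})
    return len(targets)

# Illustrative figures: target servers with 100 CPU units and 16 GB memory.
capacity = {"cpu": 100, "mem": 16}
source_servers = [{"cpu": 50, "mem": 12}, {"cpu": 50, "mem": 12},
                  {"cpu": 50, "mem": 12}, {"cpu": 50, "mem": 4}]

cpu_only = first_fit([{"cpu": s["cpu"]} for s in source_servers], capacity)
cpu_and_mem = first_fit(source_servers, capacity)
print(cpu_only, cpu_and_mem)  # 2 3
```

With CPU alone, any two source servers share a target (50 + 50 = 100), giving two target servers; once memory is also considered, no two 12 GB source servers can share a 16 GB target, so three target servers are needed. The reduced parameter set thus underestimates the true requirement, as described above.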
In addition, since the result obtained with the reduced parameter set is unreliable, it is often necessary to change the distribution of the source servers at a later date when the network behaviour is better understood.