The present application relates generally to an improved data processing apparatus and method and more specifically to mechanisms for optimizing data centers based on a variety of factors.
Investments in computing resources represent a significant overhead for large organizations. Large organizations, such as the United States federal government, individual departments and agencies of governments, businesses such as International Business Machines Corporation of Armonk, N.Y., Bank of America, General Electric, Citigroup, and many others, may have thousands of computers of various types, configurations, capabilities, and levels of efficiency. Managing all of these computing resources so that the organization makes efficient and optimum use of them is an arduous task. Efficient and optimum use of these computing resources includes identifying non-utilized, under-utilized, or simply obsolete technology for decommissioning and replacement.
One major hurdle to optimizing the computing resources of an organization is the reluctance of organization personnel to change. That is, when personnel are issued computing resources, e.g., a laptop, a desktop computer, etc., they view those computing resources as being completely dedicated to them, i.e., the employee has 100% entitlement to the computing resources regardless of whether the employee actually uses them. This gives the employee a sense of confidence that if the employee ever needs the computing resources, they will be available, since the employee has 100% entitlement to them. As a result, the employee is reluctant to exchange those computing resources for other computing resources to which the employee perceives less than 100% entitlement, such as in a shared computing resource environment, e.g., a virtualized computing environment or the like. That is, even though the employee may not be utilizing, or at least not fully utilizing, the computing resources, the employee would rather keep the guarantee that the computing resources will be available, no matter how inefficient or obsolete they may be, than take the risk that new computing resources may not be available when needed.
Furthermore, known mechanisms allocate computing resources to employees but then provide no mechanism for determining whether the employee is using those computing resources to achieve the business purpose for which they were given. Thus, oftentimes, computing resources may be allocated to an employee who uses them, but not to achieve the business purposes for which they were intended, and who may in fact be using them for another purpose. Thus, while the computing resources may appear to be utilized, they are not in fact being used to benefit the organization. There is no known mechanism for detecting such situations and then handling them so as to optimize the benefit of computing resource allocation to the organization.
Mechanisms are needed for incentivizing employees to relinquish non-utilized, under-utilized, or obsolete computing resources for new computing resources in such a way as to give them a sense of confidence that their needs will be met with the new computing resources. Moreover, mechanisms are needed for detecting situations where computing resources are not being used to achieve the business purposes for which they were allocated and then handling these situations so as to optimize the allocation of computing resources to the benefit of the organization as a whole.
Additionally, there is currently weak support for defining a “use” of a computing resource beyond the execution of an atomic workload. The “use” of a computing resource, as its user conceives it, generally spans a timescale of months, not seconds, encapsulating any number of different interleaved workloads and including expected periods of non-use, which may last multiple days. Thus, optimizing an environment in which “use” is so defined is much more involved than simply finding an execution environment for atomic workloads.
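This long-timescale notion of “use” can be illustrated with a minimal sketch. All names here (e.g., `Use`, `active_ratio`, `expected_idle`) are hypothetical illustrations, not drawn from the application itself; the sketch simply models a months-long use as a window of days containing interleaved workload activity and planned non-use:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Use:
    """A long-lived 'use' of a computing resource: a window spanning weeks
    or months, with interleaved workload activity and expected periods of
    non-use (hypothetical model, not the application's mechanism)."""
    start: date
    end: date
    workload_days: set = field(default_factory=set)   # days with any workload activity
    expected_idle: set = field(default_factory=set)   # days of planned non-use

    def active_ratio(self) -> float:
        """Fraction of accountable days (total days minus expected idle
        days) on which some workload actually ran."""
        total = (self.end - self.start).days + 1
        accountable = total - len(self.expected_idle)
        return len(self.workload_days) / accountable if accountable else 0.0

# Example: a 30-day use with 12 workload days and a 5-day planned idle stretch.
u = Use(start=date(2024, 1, 1), end=date(2024, 1, 30))
u.workload_days = {date(2024, 1, 1) + timedelta(days=i) for i in range(12)}
u.expected_idle = {date(2024, 1, 20) + timedelta(days=i) for i in range(5)}
print(round(u.active_ratio(), 2))  # 12 / (30 - 5) = 0.48
```

The point of the sketch is that under such a model, a resource idle for several days may still be fully consistent with its “use,” so any optimization must reason about the whole window rather than scheduling individual atomic workloads.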