Businesses are making tremendous investments in computer hardware and data centers. Meanwhile, the costs of powering and cooling those data centers are steadily increasing. To make matters worse, data center real estate is at a premium, while demand relentlessly expands for the sheer processing power needed to meet the complex and growing needs of businesses. Set against this need for more computer hardware and larger data centers is a troubling statistic: on average, only 8-12% of the processing power of any given data center machine is active, while the processors remain essentially idle the rest of the time.
For example, large batch processing machines used by banks are configured to run large batches of reconciliations. But when a machine is not performing those reconciliations, it is in essence "wasting" processing power until another batch of reconciliations begins, or until the machine is removed or powered off for maintenance. This wasted processing power results in bloated information technology budgets and an overall increase in costs to businesses.
Virtualization of computer resources is changing the face of computing by offering a way to put idle machines to greater use. Virtualization is a broad term that refers to the abstraction of computer resources. In other words, the physical characteristics of computing resources may be hidden from the systems, applications, or end users that interact with those resources. The most basic use of virtualization involves reducing the number of servers by increasing the utilization levels of a smaller set of machines. This includes making a single physical resource, such as a server or storage device, appear to function as one or more logical resources; conversely, it can make multiple physical resources appear as a single resource. For instance, if a server's average utilization is only 15%, deploying multiple virtual machines onto that server has the potential to increase overall utilization by a factor of 5 or more. Thus, not only is each machine used more efficiently, but the usability of the system as a whole is also enhanced.
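The consolidation arithmetic above can be sketched in a few lines. This is a deliberately naive illustration, assuming utilizations simply add until the host saturates; the function name and figures are hypothetical, with the 15% and factor-of-5 values taken from the example in the text.

```python
def consolidated_utilization(per_vm_utilization: float, vm_count: int) -> float:
    """Naive estimate: per-VM utilizations add until the host saturates at 100%."""
    return min(per_vm_utilization * vm_count, 1.0)

# Five workloads that would each keep a dedicated server only 15% busy
# drive a single consolidated host to roughly 75% utilization -- a
# five-fold improvement over the 15% figure cited above.
host_load = consolidated_utilization(0.15, 5)
assert abs(host_load - 0.75) < 1e-9
```

In practice workloads rarely peak at the same time, so real capacity planners model peak overlap rather than simple addition, but the basic motivation is the same.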
While the virtualization of computer resources promises to deliver many benefits, there are worrisome problems that lurk beneath the surface of this new and exciting computing trend. A virtual machine may be a single instance of a number of discrete identical execution environments on a single computer, each of which runs an operating system (OS). These virtual machines act as individual computing environments and therefore are subject to many of the same operating deficiencies found in standard physical computing environments. The virtual machines can be configured improperly, often by well-intentioned technicians or operators, and then broadly deployed. Operating systems, applications, and configurations can be modified from the expected state, thereby creating a drift between the expected and actual machine configuration.
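The "drift" described above can be made concrete with a minimal sketch: compare a machine's expected configuration against its actual one and report every setting that differs. The keys and values below are hypothetical; real inventory data would come from a hypervisor or configuration-management database.

```python
def find_drift(expected: dict, actual: dict) -> dict:
    """Return each setting whose actual value differs from the expected value,
    mapped to an (expected, actual) pair. Missing settings appear as None."""
    keys = expected.keys() | actual.keys()
    return {
        k: (expected.get(k), actual.get(k))
        for k in keys
        if expected.get(k) != actual.get(k)
    }

expected = {"os": "Linux 2.6", "app_version": "4.2", "firewall": "enabled"}
actual   = {"os": "Linux 2.6", "app_version": "4.3", "firewall": "disabled"}

drift = find_drift(expected, actual)
# drift flags app_version and firewall as deviating from the expected state
```

An empty result means the machine still matches its expected configuration; any entry is a drift that an auditor or operator would need to investigate.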
Additionally, the lifecycle of a virtual machine can vary widely depending upon the specific operation for which it was provisioned. No longer must a physical server be dedicated to running a monthly task (such as billings and reconciliations). A virtual machine can be provisioned with the same OS, applications, and configurations and placed into physical storage until it is ready to execute. Once copied to a physical machine, it can be executed, perform its monthly cycle functions, and then be shut down and returned to storage. In this way, virtual machines may be used much like physical servers are today, but may operate less frequently, e.g., running for just hours or minutes at a time rather than the months or years often seen with a physical server. As a result, auditors, technicians, and other operators can no longer sit down at a specific physical server dedicated to a specific task or group of transactions. Instead, the virtual resources of an entire data center are used to perform the transactions. It is therefore difficult to know which physical server ran which transaction, what its state was, whether the correct software was being used, whether the correct controls were in place, whether those controls complied with regulatory requirements, and so forth.
Another problem that threatens the viability of the virtualization movement is that of access control, security, and data integrity. Whereas gaining access to a data center once required interaction with physical servers, buildings, and people, in a virtualized environment such safeguards are lessened. For example, before virtualization, adding a physical server to a data center involved somebody swiping an access card or passing another security measure to enter the data center, carrying a box in under the supervision of other IT professionals or building managers, and installing the physical server into a rack. With the advent of virtualization, a person can theoretically sit in a remote location and install a new server into the virtualized environment without ever physically accessing the data center. Thus, the ability to control the data center environment is diminished. And while malicious activity accounts for only about 3-5% of data center issues, most issues are caused by well-intentioned people who are either inadequately trained or make honest mistakes, leading to system or component failures that can sometimes be very severe, even catastrophic.
Accordingly, a need remains for a way to identify and authenticate the integrity of virtual machines and their components. The present application addresses these and other problems associated with the prior art.
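One way such integrity identification and authentication might work, sketched here using only standard-library hashing, is to record a cryptographic digest for each component of a virtual machine (disk image, configuration file, and so on) at provisioning time, then compare current digests against that recorded manifest before the machine is trusted to run. The file names and manifest format below are hypothetical, not taken from the application.

```python
import hashlib

def component_digest(data: bytes) -> str:
    """SHA-256 digest of one virtual machine component."""
    return hashlib.sha256(data).hexdigest()

def verify_manifest(components: dict, manifest: dict) -> list:
    """Return the names of components whose digests no longer match the manifest."""
    return [
        name for name, data in components.items()
        if component_digest(data) != manifest.get(name)
    ]

# Record a manifest when the virtual machine is provisioned...
components = {"disk.img": b"...disk contents...", "vm.conf": b"mem=2048"}
manifest = {name: component_digest(data) for name, data in components.items()}

# ...then any later modification, however small, is detected as a mismatch.
components["vm.conf"] = b"mem=4096"
tampered = verify_manifest(components, manifest)
assert tampered == ["vm.conf"]
```

A digest comparison of this kind detects drift and tampering but does not by itself prove who made a change; a complete scheme would pair it with signing and access records.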