1. Field of the Invention
The present invention relates generally to integrity measurement and, more particularly, to methods and systems to verify the integrity of a software-based computer system.
2. Description of the Related Art
The computer industry has shown increased interest in leveraging integrity measurements to gain more confidence in general purpose computing platforms. A concern regarding this trend is that the approaches to integrity measurement promoted by new security technologies have yet to mature sufficiently for integrity measurement to realize its potential security impact.
In the general sense, measurement is a process of characterizing software. There are any number of ways in which the same piece of software could be characterized, each potentially resulting in a different measurement technique. The reasons for measuring a piece of software are varied, and some measurement techniques are more appropriate than others for a given purpose.
One common technique is hashing. A hash is computed over static regions of the software and used as the characterization. Although hashes are easily computed, stored and used, hashing is by no means the only possible measurement technique. Existing measurement systems tend to rely on hashes of security relevant objects such as the BIOS, the executable code of an operating system, or the contents of configuration files. Hashing is extremely effective as a measurement technique in certain circumstances. However, hashing does not always produce results that allow a complete determination of integrity.
A fundamental property of an Integrity Measurement System (IMS) is the use of measurement data as supporting evidence in decisions about the integrity of a target piece of software. An ability to produce accurate assessments of software integrity allows an IMS to contribute significantly to security in many scenarios. Without measurement techniques appropriate to the decision for a given scenario, an IMS cannot correctly determine integrity.
For example, to the user of a system, an IMS could help determine if the system is in a sufficiently safe state to adequately protect data. It could help determine the pedigree of the provider of a service or software, as well as the software itself. An Information Technology (IT) department could benefit from an IMS to help ensure that systems connected to its network are indeed in some approved configuration. To a service provider, an IMS enables decisions about granting a particular service to include statements about the integrity of the requesting system and/or application. In each of these scenarios, the reasons for needing an integrity decision, as well as the type of measurement data suitable for that decision, might be different.
There are multiple ways in which an IMS architecture could be implemented, four of which are shown in FIG. 1. They share several common elements: a measurement agent (MA), a target of measurement (T), and a decision maker (DM). An MA collects measurement data about T using some appropriate measurement technique. The MA needs to have access to T's resources and be able to hold the measurement data until needed. The DM acts as a validator or appraiser responsible for interpreting measurement data in support of integrity decisions. In an IMS that uses hashing for measurement, this component would likely be responsible for comparing hashes to known good values. Lastly, an IMS must have a means of presenting collected data to a DM. Depending on the implementation, this could be as simple as displaying measurements to a user or administrator, but more complex systems require protocols for communicating the authenticity and integrity of measurement data to the DM.
One common notion of an IMS has the MA and T co-resident on the user's platform, while the DM runs on a separate machine controlled by the owner. Measurement data is transferred to the DM using an attestation protocol. However, it should be noted that many other possible layouts for an IMS are also appropriate.
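The division of labor among the MA, T, and DM, together with a nonce-based attestation exchange, can be sketched as follows. This is a hypothetical simplification: the class names are invented for the example, and an HMAC over a shared key stands in for the signed quotes a real attestation protocol would use:

```python
import hashlib
import hmac

class MeasurementAgent:
    """MA: measures the target and reports to the DM (hypothetical sketch)."""
    def __init__(self, key):
        self._key = key  # stands in for a device identity key

    def quote(self, target_bytes, nonce):
        # Measure the target, then bind the result to the DM's nonce.
        digest = hashlib.sha256(target_bytes).hexdigest()
        mac = hmac.new(self._key, (digest + nonce).encode(),
                       hashlib.sha256).hexdigest()
        return {"digest": digest, "nonce": nonce, "mac": mac}

class DecisionMaker:
    """DM: appraises reported measurements against known good values."""
    def __init__(self, key, known_good):
        self._key = key
        self._known_good = known_good

    def appraise(self, report, nonce):
        expected = hmac.new(self._key, (report["digest"] + nonce).encode(),
                            hashlib.sha256).hexdigest()
        fresh = report["nonce"] == nonce
        authentic = hmac.compare_digest(report["mac"], expected)
        return fresh and authentic and report["digest"] in self._known_good
```

The DM issues a fresh nonce for each request, and the MA binds its measurement to that nonce, so a stale or replayed report can be detected.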
When designing an IMS to meet the needs of any given scenario, how the above-mentioned components are integrated into the system, as well as the properties of each of them, can greatly impact the system's ability to provide the quality of measurement data the DM needs in order to deliver the desired security benefits. These design choices also impact the ability of a given IMS to support multiple scenarios for the same platform. The design of an IMS tailored to a specific scenario is likely to differ greatly from one intended to serve a more general purpose. Considering an IMS in terms of these component pieces yields different dimensions by which an IMS can be evaluated.
The use of an IMS raises privacy concerns. Owners of measurement targets may be hesitant to release certain types of measurements to a DM for a variety of valid reasons. IMS component design impacts an IMS's ability to adequately address privacy concerns.
Measurement deals with what might be described as expectedness. Eventually a decision will be needed to determine whether the software relied upon for a critical function is indeed the expected version that was previously determined trustworthy to perform that function, and whether it is either in a known good state or, perhaps, not in a known bad state. A suitable measurement process must produce data sufficient for an IMS to make this determination.
In order to assess the sufficiency of any measurement process, the measurement data's intended purpose must be understood. A technique deemed sufficient for one measurement scenario might prove completely inadequate for another. An IMS's DM and how it relies on integrity evidence for security will ultimately determine if a given measurement technique is suitable.
Integrity measurements are evidence to be used in decisions relevant to the execution of a piece of software or some other software that depends on it. These decisions require an assessment of software state and perhaps its environment. Since any such decision's validity will rely on the quality of the evidence, where quality is reflected in terms of how accurately the measurement data characterizes those portions of the software relevant to the pending decision, it is useful to consider integrity measurement techniques based on their potential to completely characterize a target irrespective of scenario or system. Techniques with greater potential for complete characterization should be considered better suited for decision processes requiring a true measure of integrity.
Besides understanding a measurement process's ability to characterize the target, there are other characteristics of an IMS's MA useful for examining its sufficiency for producing adequate evidence of a target's expectedness. Among them are an MA's ability to produce all evidence required by the IMS's DM and to reflect in that evidence the current state of the potentially executing target.
In order to discuss integrity measurement systems, it is necessary to have a common measurement vocabulary. With a suitable vocabulary, it becomes possible to assess and compare measurement techniques to determine their suitability in a given IMS for particular measurement scenarios. It would also be useful for describing how the different components of an IMS have been integrated to meet functional and security requirements.
Six properties of the measurement component of an IMS can serve as the beginnings of such a vocabulary. They provide several dimensions that have proven useful not only for assessing and comparing existing IMS but also for motivating the design of new IMS. These are not the only dimensions along which an IMS could be discussed, and these properties are not intended to be canonical. They do, however, form a good framework for discussions about important aspects of an IMS. The measurement component of an IMS should:
Produce Complete results. An MA should be capable of producing measurement data that is sufficient for the DM to determine if the target is the expected target as required for all of the measurement scenarios supported by the IMS.
Produce Fresh results. An MA should be capable of producing measurement data that reflects the target's state recently enough for the DM to be satisfied that the measured state is sufficiently close to the current state as required for all of the measurement scenarios supported by the IMS.
Produce Flexible results. An MA should be capable of producing measurement data with enough variability to satisfy potentially differing requirements of the DM for the different measurement scenarios supported by the IMS.
Produce Usable results. An MA should be capable of producing measurement data in a format that enables the DM to easily evaluate the expectedness of the target as required for all of the measurement scenarios supported by the IMS.
Be Protected from the target. An MA should be protected from the target of measurement to prevent the target from corrupting the measurement process or data in any way that the DM cannot detect.
Minimize impact on the target. An MA should not require modifications to the target, nor should its execution negatively impact the target's performance.
Tripwire (G. Kim and E. Spafford, The Design and Implementation of Tripwire: A File System Integrity Checker. Purdue University, November 1993) was an early integrity monitoring tool. It allowed administrators to statically measure systems against a baseline. Using Tripwire enables complete integrity measurement of file system objects such as executable images or configuration files. These measurements, however, cannot be considered complete for the runtime image of processes. Tripwire provides no indication that a particular file is associated with an executing process, nor can it detect the subversion of a process.
Tripwire performs well with respect to freshness of measurement data, and the impact on the target of measurement. Remeasurement is possible on demand, enabling the window for attack between measurement collection and decision making to be quite small. Since Tripwire is an application, installation is simple and its execution has little impact on the system. But because it is an application, the only protection available is that provided by the target system, making Tripwire's runtime process and results vulnerable to corruption or spoofing.
Tripwire is also limited with respect to flexibility and usability. Decision makers may only base decisions on whether or not a file has changed, not on the way in which that file has changed. Tripwire cannot generate usable results for files which may take on a wide variety of values. These limitations are generally characteristic of measurement systems that rely on hashes, making them most effective on targets not expected to change.
IMA (R. Sailer, X. Zhang, et al., Design and implementation of a TCG-based integrity measurement architecture, Proceedings of the 13th Usenix Security Symposium, pages 223-238, August 2004) and systems like PRIMA (T. Jaeger, R. Sailer, and U. Shankar, PRIMA: Policy-reduced integrity measurement architecture, SACMAT '06: Proceedings of the Eleventh ACM Symposium on Access Control Models and Technologies, 2006), which build upon its concepts, appear very similar to Tripwire when considered with respect to the described properties, but they do offer significant improvements. IMA's biggest advance is the protection of the measurement system and its data. Because it is a kernel module rather than a user-land process, it is immune to many purely user-space attacks that might subvert the Tripwire process. However, it is still vulnerable to many kernel-level attacks. Subversion of IMA's measurement results is detectable by comparing a hash value stored in the Trusted Platform Module (TPM) with the expected value generated from the measurement system's audit log.
IMA makes more complete measurements of running processes than Tripwire because IMA is able to associate running processes with the recorded hash values. However, results only reflect static portions of processes before execution begins. Because no attempt is made to capture the current state of running processes, fresh measurements cannot be provided to any decision process requiring updated measurements of the running process.
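The audit-log check described above relies on the TPM extend operation, in which a Platform Configuration Register (PCR) accumulates a hash chain over all recorded measurements. A minimal sketch, using the 20-byte SHA-1 PCRs of TPM 1.2:

```python
import hashlib

def extend(pcr, measurement):
    """TPM-style extend: the register absorbs each new measurement,
    so the final value commits to the entire ordered log."""
    return hashlib.sha1(pcr + measurement).digest()

def verify_log(log_digests, reported_pcr):
    """Replay the audit log and compare the recomputed aggregate with
    the value reported from the TPM; a mismatch indicates the log was
    truncated, reordered, or otherwise tampered with."""
    pcr = b"\x00" * 20  # PCRs start zeroed
    for d in log_digests:
        pcr = extend(pcr, d)
    return pcr == reported_pcr
```

Because each extend folds the previous register value into the next hash, a verifier that replays the audit log and arrives at the reported PCR value gains confidence that the log has not been altered after the fact.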
PRIMA extends the IMA concept to better minimize the performance impact on the system. By coupling IMA to SELinux policy (P. Loscocco and S. Smalley, Integrating flexible support for security policies into the linux operating system, Proceedings of the FREENIX Track, June 2001), the number of measurement targets can be reduced to those that have information flows to trusted objects. This may also aid completeness in that measurement targets can be determined by policy analysis. However, the requirement that trusted applications be PRIMA-aware, together with the required modifications to the operating system, imposes development impacts on the target.
CoPilot (N. Petroni, Jr., T. Fraser, et al., Copilot: a coprocessor-based kernel runtime integrity monitor, Proceedings of the 13th Usenix Security Symposium, pages 179-194, August 2004) raises the bar with respect to completeness, freshness and protection. Cryptographic hashes are still used to detect changes in measured objects, but unlike other systems, CoPilot's target of measurement is not the static image of a program and its configuration files but the memory image of a running system. It also attempts to verify the possible execution paths of the measured kernel. The ability to inspect the runtime memory of the target is an improvement over file system hashes because it enables decisions about runtime state. Protection from the target is achieved by using a physically separate processing environment in the form of a PCI expansion card with a dedicated processor.
Although a considerable advance, CoPilot fails as a complete runtime IMS in two key ways. It cannot convincingly associate hashed memory regions with those actually in use by the target. It can only measure static data in predefined locations; dynamic state of the target is not reflected. The requirement of additional hardware in the target environment also impacts the target.
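CoPilot's approach of hashing predefined regions of a memory image can be sketched as follows. The region names, offsets, and sizes here are illustrative assumptions rather than actual kernel addresses:

```python
import hashlib

# Hypothetical map of supposedly static regions within a memory image.
REGIONS = {"kernel_text": (0x100, 0x40), "syscall_table": (0x200, 0x20)}

def measure_regions(memory_image):
    """Hash each predefined region of the captured memory image."""
    return {name: hashlib.sha256(memory_image[off:off + size]).hexdigest()
            for name, (off, size) in REGIONS.items()}

def changed_regions(memory_image, baseline):
    """List regions whose current hash differs from the baseline.
    Dynamic state outside the predefined regions is invisible to
    this check."""
    now = measure_regions(memory_image)
    return [name for name in REGIONS if now[name] != baseline[name]]
```

The sketch makes the completeness gap concrete: only bytes inside the predefined regions are ever examined, so dynamic data structures elsewhere in memory cannot be reflected in the results.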
Other measurement systems have been developed. Unlike those discussed so far, some use computations on or about the target system rather than employing a more traditional notion of measurement such as hashing. One such system is Pioneer (A. Seshadri, M. Luk, et al., Pioneer: Verifying code integrity and enforcing untampered code execution on legacy systems, ACM Symposium on Operating Systems Principles, October 2005). It attempts to establish a dynamic root of trust for measurement without the need for a TPM or other hardware enhancements. The measurement agent is carefully designed to have a predictable run time and an ability to detect preemption. The measurement results can be fresh but are far from a complete characterization of the system. In theory, though, this approach could support more complete measurement as long as the preemption-detection property is preserved.
Pioneer was designed to detect attempts by the target to interfere with the measurement agent, but it requires the difficult condition that the verifier be able to predict the amount of time elapsed during measurement. The impact on the target system can also be great because, in order to achieve the preemption-detection property, all other processing on the target has to be suspended for the entire measurement period.
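The timing-based appraisal on which Pioneer depends can be sketched as follows. This is a loose illustration: a real verifier must predict the checksum routine's runtime with great precision, whereas the time budget here is simply a parameter supplied to the check:

```python
import hashlib
import time

def timed_checksum(code_bytes):
    """Run the checksum routine and report the elapsed time."""
    start = time.perf_counter()
    digest = hashlib.sha256(code_bytes).hexdigest()
    return digest, time.perf_counter() - start

def verify_response(reported_digest, expected_digest, elapsed, time_budget):
    """Accept only a correct digest produced within the time budget;
    a correct but slow answer suggests the agent was emulated or
    interfered with by the target."""
    return reported_digest == expected_digest and elapsed <= time_budget
```

The sketch shows why the approach is demanding in practice: the verifier must know `time_budget` tightly enough that any interposition by the target produces a measurable delay.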
Semantic integrity is a measurement approach targeting the dynamic state of the software during execution, thereby providing fresh measurement results. Similar to the use of language-based virtual machines for remote attestation of dynamic program properties (V. Haldar, D. Chandra, and M. Franz, Semantic remote attestation: a virtual machine directed approach to trusted computing, Proceedings of the 3rd USENIX Virtual Machine Research & Technology Symposium, May 2004), this approach can provide increased flexibility for the challenger. If the software is well understood, then semantic specifications can be written to allow the integrity monitor to examine the current state and detect semantic integrity violations. This technique alone will not produce complete results, as it does not attempt to characterize the entire system, but it does offer a way to produce integrity evidence about portions of the target not suitable for measurement by hashing.
Such an approach has been shown effective in detecting both hidden processes and SELinux access vector cache inconsistencies in Linux (N. Petroni, Jr., T. Fraser, et al., An architecture for specification-based detection of semantic integrity violations in kernel dynamic data, Security '06: 15th USENIX Security Symposium, 2006). A very flexible system was produced that can be run at any time to produce fresh results and that is easily extended with new specifications. Better completeness than is possible from hashing alone is achieved, since kernel dynamic data is measured, but no attempt was made to completely measure the kernel; completeness can only come with many additional specifications. Like CoPilot, a separate hardware environment was used to protect the measurement system from the target and to minimize the impact on the target, at the cost of having extra hardware installed. However, it is subject to the same limitations as CoPilot.
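A specification of the kind described above can be quite small. The following hypothetical example expresses the hidden-process check as a comparison between two views of the kernel's task data, with plain lists standing in for actual kernel data-structure walks:

```python
def hidden_processes(scheduler_view, listing_view):
    """Specification: every task the scheduler knows about must also
    appear in the reported process listing; anything missing from the
    listing is a semantic integrity violation (a hidden process)."""
    return sorted(set(scheduler_view) - set(listing_view))
```

Note that no hash is involved: the check is a predicate over dynamic state, which is precisely what lets this technique cover targets that hashing cannot.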