Challenges exist in determining what security policy is actually being enforced by computer systems. In computer systems where multiple applications are run on behalf of independent entities, the possibility exists for one application to influence or be influenced by another. As used herein, “computer systems” refers to an environment of one or more physical machines, each running zero or more virtual machines. The physical and virtual machines may, for example, be heterogeneous in the hardware and software installed on them, and may, for example, be physically collocated or managed by the same administrative entity.
As used herein, “entities” refers to users, organizations, and/or administrators who cause one or more independent applications to be executed on one or more of the computer systems. These applications may execute simultaneously, their executions may overlap for one or more periods of time, or their executions may be mutually exclusive. Also as used herein, “influence” refers to information that can be accessed by another application, modified by another application, or both accessed and modified by another application. Such information concerns or describes the application's execution (for example, program source code, instructions or data stored in caches, memory, disks, tapes, optical, network or other storage devices, as well as information transmitted to or received from input/output (I/O) devices such as monitors, human input devices, and wired and wireless network devices) and its environment (for example, application, operating system, virtual machine, and/or hardware scheduling algorithms; static or dynamic allocations to the application of system resources, such as processing time on the main or peripheral processors and space on memory or storage devices such as those mentioned above; and/or energy consumed during the application's execution).
Sometimes this sharing of information is desirable from the point of view of the entities involved. For example, information flowing from one application to another is a natural part of distributed application design. An administrator may wish to specify that an application is to receive a “best-effort” allocation of resources after all other applications have consumed what they need.
However, sometimes this sharing of information is not desirable. For example, the information flowing from one application to another may contain a patient's personal medical information (that is, information considered sensitive by one or more of the entities). Therefore, for ethical and legal-compliance reasons, the entity desires that such information not be made accessible or modifiable by an unauthorized third application.
A security policy can be, for example, a set of restrictions and/or permissions, enforced by a computer system, concerning how one system component may influence or be influenced by another system component. The security policy can describe how the computer system moderates access to the resources shared by the components (whether shared within a single computer system or across multiple systems), and to any data contained in those resources, in order to meet the entities' goals for protecting the components.
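The notion of a policy as a set of restrictions and permissions that moderate access to shared resources can be made concrete with a minimal sketch. All names here (Rule, SecurityPolicy, the component and resource labels) are illustrative assumptions, not part of the source:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    component: str   # the component requesting access
    resource: str    # the shared resource being accessed
    action: str      # "read" (be influenced) or "write" (influence)
    allow: bool

class SecurityPolicy:
    """Restrictions/permissions moderating access to shared resources."""
    def __init__(self, rules, default_allow=False):
        self.rules = rules
        self.default_allow = default_allow  # default-deny is the usual choice

    def permits(self, component, resource, action):
        # First matching rule wins; otherwise fall back to the default decision.
        for r in self.rules:
            if (r.component, r.resource, r.action) == (component, resource, action):
                return r.allow
        return self.default_allow

policy = SecurityPolicy([
    Rule("app_a", "shared_cache", "read", True),
    Rule("app_b", "shared_cache", "write", False),
])
```

Under such a model, the question this document raises is how to determine which `SecurityPolicy` a running system is actually enforcing, as opposed to the one that was specified.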
Existing approaches for security policy determination are specification-oriented, in that the policy is determined by querying the system about its currently configured state, or by obtaining from the entities the security policy specifications they earlier provided to the computer system. An example of specification-oriented security policy determination is using the “exportfs” command to obtain a list of the exported local file systems under the Network File System (NFS) service.
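As a sketch of what such a query yields, the following parses an /etc/exports-style NFS export table (a sample string, not a live system) into (path, client, options) tuples; the sample paths and hosts are invented for illustration:

```python
def parse_exports(text):
    """Parse /etc/exports-style lines of the form:
    /path  client1(opt,opt)  client2(opt)  # comment
    """
    exports = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and blanks
        if not line:
            continue
        path, *client_specs = line.split()
        for spec in client_specs:
            # each spec looks like client(options), e.g. 10.0.0.0/24(ro,sync)
            if "(" in spec:
                client, opts = spec.rstrip(")").split("(", 1)
                options = opts.split(",")
            else:
                client, options = spec, []
            exports.append((path, client, options))
    return exports

sample = """
/srv/share  10.0.0.0/24(ro,sync)  admin.example.com(rw)
# /srv/old  *(rw)
"""
table = parse_exports(sample)
```

The key limitation, developed below, is that this tells the administrator only what was configured, not how the running system actually behaves.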
Problems such as, for example, security policy composition and security policy verification exist with specification-oriented security policy determination. The problem of security policy composition includes the initial synthesis of an expressive security policy. It is difficult and labor-intensive for an entity to create a security policy customized for its application's unique environment. Existing approaches include starting with a default policy and having the entity manually modify this policy to fit its expected application environment. This default security policy can be all-exclusive, all-inclusive, or statically pre-configured based on an expert or external analysis of another entity's application or installation. An example of the latter is a default rule set, tuned to a specific operating-system install, that has been made available for the Tripwire intrusion detection tool.
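The default-then-modify workflow described above can be sketched as follows. The policy representation, rule paths, and the `customize` step are assumptions for illustration, not Tripwire's actual rule syntax:

```python
def default_policy(kind):
    """The three default starting points described above."""
    if kind == "all-exclusive":
        return {"default": "deny", "rules": {}}
    if kind == "all-inclusive":
        return {"default": "allow", "rules": {}}
    if kind == "pre-configured":
        # e.g. a vendor rule set tuned to a specific OS install
        return {"default": "deny",
                "rules": {"/etc/passwd": "monitor", "/var/log": "monitor"}}
    raise ValueError(kind)

def customize(policy, overrides):
    """The manual, labor-intensive step: the entity patches the defaults
    to fit its own application environment."""
    patched = {"default": policy["default"], "rules": dict(policy["rules"])}
    patched["rules"].update(overrides)
    return patched

base = default_policy("pre-configured")
tuned = customize(base, {"/opt/myapp": "monitor"})
```

Note that nothing in this workflow consults the running system; the entity must guess which overrides its environment needs, which is the bootstrapping gap discussed next.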
Such a best-practice approach is suboptimal because no provision is made for bootstrapping. The entity starts with no feedback from the computer system regarding its existing, working, productive security configuration, even though that configuration embodies the current, well-grounded de facto security policy, which could be analyzed, verified, and then tweaked or extended as needed with minimal disruption to the working system.
The problem of security policy verification includes mechanisms for auditing the computer system to determine the degree to which its behavior adheres to the security policy that was specified by the entities. Existing approaches do not include a behavior-based analysis toolkit for independently verifying the interlock between a security policy specification and the running system, or for iteratively developing and deploying a security policy based on the observed effects of various alternative configurations. This is suboptimal because it may allow incorrect or misunderstood implementations of security policy enforcement mechanisms to go undetected.
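The behavior-based verification described above can be sketched as an audit that compares observed influence events against the specified policy and reports the mismatches. The event tuples and the set-based policy shape are assumptions for illustration:

```python
def audit(specified_allowed, observed_events):
    """Return the observed events not permitted by the specification,
    i.e. the gap between the specified and the de facto security policy."""
    return [e for e in observed_events if e not in specified_allowed]

# Specified policy: the set of (component, resource, action) triples allowed.
spec = {("app_a", "shared_cache", "read")}

# Events observed while the system runs (e.g. from instrumentation).
observed = [
    ("app_a", "shared_cache", "read"),    # conforms to the specification
    ("app_b", "shared_cache", "write"),   # absent from the specification
]

gaps = audit(spec, observed)
```

Run iteratively across alternative configurations, such an audit would surface exactly the incorrect or misunderstood enforcement behavior that specification-only determination cannot detect.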
Existing approaches include, for example, U.S. Pat. No. 7,016,980 entitled “Method and Apparatus for Analyzing One or More Firewalls,” which includes analyzing the operation of one or more network gateways, such as firewalls or routers that perform a packet filtering function in a network environment. However, because it addresses only network gateways, this approach provides an incomplete solution to the security policy discovery problem.
Existing approaches also include, for example, U.S. Published Application No. US 20060206935 entitled “Apparatus and Method for Adaptively Preventing Attacks,” which includes adaptively preventing attacks which can reduce false positives and negatives for abnormal traffic and can adaptively deal with unknown attacks.
Additionally, existing approaches include, for example, U.S. Pat. No. 7,185,367 entitled “Method and system for establishing normal software system behavior and departures from normal behavior,” which includes detecting abnormal activity of a software system based on behavioral information obtained from an instrumented computer program while it executes.