Computers and networks of computers are used by many businesses and other organizations to enable employees and other authorized users to access and exchange information. Often, computers that are connected to a local area network communicate with other computers that are not connected to the network, such as by modem or other device via the Internet. In such cases, the local area network may be vulnerable to attacks by unauthorized users, who may be able to gain unauthorized access to files stored on computers on the local area network over a communication port of a computer communicating outside of the local area network.
As referred to herein, a “computer configuration” refers to a computing device, a networked computing device, components of a networked computing device, and/or hardware or software subsystems that make up a component of a networked computing device. Examples of computer networks include the Internet, Fiber Distributed Data Interface (“FDDI”), and a token ring network, as known to one skilled in the art. A computing device may be a large-scaled (“mainframe”) computer system, a mid-sized (“mini”) computer, a personal computer, or any smaller processing device such as a handheld computer, a personal digital assistant (“PDA”), a cellular telephone, or the like. A computer configuration can include, for example, routers, switches, workstations, personal computers and printers, as well as particular hardware types, operating systems, and application programs.
Two useful and known prior art systems that provide security against such attacks are vulnerability assessment systems and intrusion detection systems.
Vulnerability assessment systems detect weaknesses in a computer configuration or a computer network that could lead to unauthorized uses and associated exploits, collectively referred to herein as vulnerabilities. Vulnerability assessment systems can be highly complex because the vulnerabilities associated with any given network can depend upon the version and configuration of the network, as well as upon the respective devices and subsystems coupled to the network. Additionally, networks can have vulnerabilities that arise from a single aspect of the system, and/or vulnerabilities that arise as a result of the interaction of multiple aspects of the system.
Current vulnerability assessment tools typically are single-vendor solutions that address a single aspect of system vulnerability. These tools tend to fall into one of three types, each of which is briefly described below.
A first known type of vulnerability assessment tool uses a database that documents particular vulnerabilities, and attempts to repair known vulnerabilities. Tools of this type are typically vendor-dependent for database updates, and also require that new product versions be installed or maintained, such as via a subscription service. Examples from this category include INTERNET SECURITY SYSTEMS' INTERNET SCANNER, NETWORK ASSOCIATES, INC.'s CYBERCOP and HARRIS' SECURITY THREAT AVOIDANCE TECHNOLOGY (“STAT”).
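By way of a non-limiting illustration, the database-driven approach may be sketched as follows. This is a minimal sketch only; the product names, version numbers, and vulnerability descriptions are invented for illustration and do not reflect any particular vendor's database.

```python
# Illustrative sketch of a database-driven vulnerability scanner:
# an inventory of installed software versions is checked against a
# database of known vulnerabilities. All data below are hypothetical.

# Known-vulnerability database: product -> {vulnerable version: description}
VULN_DB = {
    "httpd": {"1.3.0": "remote buffer overflow in request parsing"},
    "ftpd": {"2.1.0": "anonymous write access enabled by default"},
}

def scan(inventory):
    """Return a list of (product, version, description) findings."""
    findings = []
    for product, version in inventory.items():
        description = VULN_DB.get(product, {}).get(version)
        if description is not None:
            findings.append((product, version, description))
    return findings

host = {"httpd": "1.3.0", "sshd": "3.4"}
print(scan(host))  # reports the vulnerable httpd installation
```

As the section notes, such a tool can only be as current as its database: a vulnerability absent from VULN_DB is simply not reported, which is why vendor updates or subscriptions are required.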
A second known type of vulnerability assessment tool uses various data parameters to calculate a risk indicator. An example of this tool category is the LOS ALAMOS VULNERABILITY ASSESSMENT (“LAVA”) tool. Unfortunately, these tools are difficult to maintain and keep current due to rapidly evolving threats and changing technology environments.
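A risk-indicator calculation of this general kind may be sketched as a weighted combination of observed parameters. The parameter names and weights below are invented for illustration and are not drawn from LAVA or any other particular tool.

```python
# Illustrative sketch of a risk-indicator calculation: observed data
# parameters are combined into a single score via fixed weights.
# Parameter names and weight values are hypothetical.

WEIGHTS = {
    "open_ports": 0.4,
    "unpatched_services": 1.0,
    "weak_passwords": 0.8,
}

def risk_indicator(parameters):
    """Weighted sum of observed parameter counts, yielding one risk score."""
    return sum(WEIGHTS[name] * count
               for name, count in parameters.items()
               if name in WEIGHTS)

print(risk_indicator({"open_ports": 5, "unpatched_services": 2}))  # 4.0
```

The maintenance burden noted above is visible even in this sketch: both the parameter set and the weights must be revised by hand as threats and technology environments change.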
A third known type of vulnerability assessment tool examines a particular aspect of the system, such as the operating system or database management system, but ignores the other system components. An example of this type of tool is SYSTEM ADMINISTRATOR TOOL FOR ANALYZING NETWORKS (“SATAN”), which analyzes operating system vulnerabilities, but ignores infrastructure components such as routers.
In view of the above-identified shortcomings of the three respective types of vulnerability assessment tools, it is believed by the inventor that a plurality of these types of vulnerability assessment tools operating together would be preferable. Unfortunately, using multiple tools from a variety of vendors for a single computer network analysis is labor-intensive. For example, a security engineer must enter a description or representation of the configuration or network multiple times in various formats. The security engineer must also manually analyze, consolidate and/or merge outputs resulting from these disparate tools into a single report that describes the configuration's or network's security posture. Thereafter, the security engineer must complete a risk analysis (e.g., calculating expected annual loss, surveying controls, etc.), and repeat the entire process in order to analyze alternatives in view of the assessed security risks, system performance, mission functionality and/or available budget.
Another difficulty facing vulnerability assessment tools is the highly dynamic nature of a network environment. Devices of known and/or unknown types can be added to and/or removed from a network essentially at any time. Additionally, different versions and types of subsystems can be introduced to a network. Each change or upgrade to a network increases the potential for new or changed vulnerabilities to exist on that network.
Current conventional systems that attempt to assess the vulnerability of computer systems are believed to be deficient for a variety of other reasons. For example, COMPUTER ORACLE AND PASSWORD SYSTEM (“COPS”) is designed to probe for vulnerabilities on a host system. Unfortunately, COPS does not maintain information across an entire network and predicts vulnerabilities only on a single host. Other conventional systems, such as SATAN and INTERNET SECURITY SCANNER (“ISS”), scan computer systems for vulnerabilities by actively probing, analyzing collected data for vulnerabilities, and displaying the results. However, several disadvantages are associated with these products. In one example, data collection and analysis are implemented as a single process, which makes the process prohibitively time-consuming. Furthermore, SATAN and ISS are essentially static applications, in that they do not learn over time as data collection and analysis occur.
In addition to vulnerability assessment, intrusion detection involves the detection of unauthorized uses and exploits, either in real time (as they occur) or thereafter. Intrusion detection, however, is often compared to finding a needle in a haystack: network monitoring utilities generate extremely large amounts of data, and the illegal or otherwise unauthorized activities to be identified may be evidenced by only a few anomalous data packets.
Typically, conventional intrusion detection systems operate in real-time, as they are designed to alert an operator of an intrusion attack so that the operator can respond in a timely fashion and avert damage. Unfortunately, the speed with which attacks are currently executed rarely allows time for any meaningful response from the operator, leaving a network vulnerable to an intrusion.
Intrusion detection systems are typically of three types: anomaly detection systems, rule-based systems, and signature-based systems, each of which is discussed below.
Anomaly detection systems look for statistically anomalous behavior on a network. Statistical scenarios can be implemented for user, dataset, and program usage to detect anomalous or otherwise exceptional use of the system. However, the assumption that computer misuses appear statistically anomalous has been proven unreliable. Anomaly detection techniques do not directly detect misuse, and, accordingly, do not always detect many actual misuses. For example, when recordings or scripts of known attacks and misuses are replayed on computers with statistical anomaly detection systems, few if any of these scripts are identified as anomalous. This occurs for a variety of reasons and, unfortunately, reduces the accuracy and usefulness of anomaly detection systems.
In general, therefore, anomaly detection techniques cannot detect particular instances of misuse unless the specific behaviors associated with those instances satisfy statistical tests (e.g., regarding network data traffic or computer system activity) that themselves have no security relevance. Anomaly detection techniques also produce false alarms. Many, if not most, of the reported anomalies are purely coincidental statistical exceptions and do not reflect actual security problems. Accordingly, the threat of false alarms often causes system managers to resist using anomaly detection methods, due to the increase in processing system workload and the need for expert oversight, without providing substantial benefits.
Another limitation associated with anomaly detection techniques is that user activities are often too varied for a single scenario, resulting in many inferred security events and associated false alarms. Also, statistical measures are not sensitive to the sequential order in which events occur, and this may prevent detection of serious security violations that exist when events occur in a particular order. Furthermore, scenarios that anomaly detection techniques use may themselves be vulnerable to conscious manipulation by users. For example, a knowledgeable perpetrator may train the adaptive threshold of a detection system over time to accept aberrant behaviors as normal. Furthermore, statistical techniques that anomaly detection systems use often require complicated mathematical calculations and, therefore, are usually computationally intensive.
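The statistical mechanism underlying anomaly detection, as described above, may be sketched as a simple threshold test: an observation is flagged when it deviates from historical behavior by more than a fixed number of standard deviations. The threshold value and the login-rate measure below are illustrative assumptions, not features of any particular product.

```python
# Minimal sketch of statistical anomaly detection: flag an observation
# that lies farther from the historical mean than a fixed multiple of
# the standard deviation. Threshold and data are hypothetical.
import statistics

def is_anomalous(history, observation, threshold=3.0):
    """True if observation is more than `threshold` std devs from the mean."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return observation != mean
    return abs(observation - mean) / stdev > threshold

logins_per_hour = [4, 5, 6, 5, 4, 5, 6, 5]
print(is_anomalous(logins_per_hour, 50))  # True
print(is_anomalous(logins_per_hour, 5))   # False
```

The sketch also makes the manipulation weakness concrete: if a perpetrator gradually feeds aberrant values into the history, the mean and standard deviation shift, and behavior that should be flagged comes to pass the test as normal.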
A second type of intrusion detection system, a rule-based system, has been applied in misuse detection, and generally operates as a layer “on top” of an anomaly detection system (as known in the art) for interpreting reports of anomalous behavior. Rule-based systems attempt to detect intrusions by receiving surveillance data supplied by a security system installed on a computer and then applying the data to a set of rules to determine potential scenarios that relate to attacking the computer installation. Since the underlying model is anomaly detection, rule-based systems have drawbacks similar to those of other anomaly detection techniques. It is believed by the inventor that rule-based systems are not fully satisfactory, since only those intrusions that correspond to previously stored attack scenarios are detected.
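The rule-application step described above may be sketched as matching surveillance records against stored attack-scenario rules. The rule names, record fields, and thresholds below are invented for illustration.

```python
# Illustrative sketch of a rule-based layer: each surveillance record
# reported by a monitor is tested against stored attack-scenario rules.
# Rules and record fields are hypothetical.

RULES = [
    {"name": "password guessing",
     "test": lambda r: r["event"] == "failed_login" and r["count"] >= 5},
    {"name": "privilege escalation",
     "test": lambda r: r["event"] == "setuid_exec" and r["user"] != "root"},
]

def apply_rules(record):
    """Return the names of all stored scenarios the record triggers."""
    return [rule["name"] for rule in RULES if rule["test"](record)]

print(apply_rules({"event": "failed_login", "count": 7}))
```

The limitation noted in the text is apparent here: an attack whose behavior matches no entry in RULES produces an empty result and goes undetected.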
A third type of intrusion detection system is a signature-based or pattern-detection mechanism. In this third type of system, a signature is referenced that represents a set of events and transitions/functions that define a sequence of actions that form an attack or misuse. In general, a signature mechanism uses network sensors to detect data traffic or audit trail records, which are typically generated by a computer's operating system. Typically, the designer of a product which incorporates the signature-based mechanism identifies or selects a plurality of events that together form the signature of the attack or misuse. Although the signature-based mechanism goes a step beyond rule-based systems, a signature-based system is similar to a rule-based system in that it relies upon signatures or rules.
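A signature of the kind described, i.e., a sequence of events with transitions, may be sketched as a small state machine that advances as matching events arrive in an audit trail and reports an attack when the sequence completes. The example signature (a hypothetical anonymous-FTP attack) is invented for illustration.

```python
# Sketch of signature-based detection: the signature is an ordered
# sequence of events; the detector advances through the sequence as
# matching events appear and fires when the sequence completes.
# The signature below is hypothetical.

SIGNATURE = ["connect", "anonymous_login", "site_exec"]

def matches_signature(event_stream, signature=SIGNATURE):
    """True if the signature's events occur in order within the stream."""
    state = 0  # index of the next signature event to match
    for event in event_stream:
        if event == signature[state]:
            state += 1
            if state == len(signature):
                return True  # full sequence observed: attack detected
    return False

audit_trail = ["connect", "list_dir", "anonymous_login", "site_exec"]
print(matches_signature(audit_trail))  # True
```

Unlike a purely statistical measure, this mechanism is sensitive to the order of events; but, as with rules, only attacks whose signatures have been defined in advance can be detected.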
Importantly, intrusion detection methods in use today are plagued by false positive events, as well as by an inability to detect the early stages of a network attack. This is partly because conventional intrusion detection techniques are based on specialized equipment that is located at a customer's premises, and, accordingly, may not determine a hacker's activities over a broader scale. Furthermore, after-the-fact detection is dominated by forensic tools, i.e., utilities that are designed to help a computer security expert analyze what happened on a compromised host, extracting data that have been established as relevant to known attacks. This is believed by the inventor to be unhelpful in preventing a system from becoming compromised and/or damaged.
Other shortfalls of existing intrusion detection systems include failing to utilize useful sources of data, producing large amounts of information that are difficult for a human to analyze in a timely fashion, being overly complex and difficult to use, and being designed to assist with system administration rather than attack diagnosis.
Thus, as described above, automatic vulnerability assessment systems and intrusion detection systems are time-consuming and complex. Vulnerability assessment typically involves testing a machine's profile against a database of known vulnerabilities. Often, software updates, known in the art as “patches,” are installed to ensure the machine remains protected against newly discovered vulnerabilities. However, installing patches on a machine does not guarantee that the machine is invulnerable, because merely installing patches cannot discover weaknesses in a specific machine's or network's configuration. Human vulnerability assessment experts, in contrast to automatic assessments, can tailor their investigations to a specific configuration.
Furthermore, known vulnerability assessment and intrusion detection problems require human-level intelligence to solve. Accordingly, researchers have attempted to apply artificial intelligence techniques to them. Although expert system and machine learning approaches to intrusion detection have been attempted with some success, no comprehensive effort has been made in the prior art to use human-level reasoning and learning capabilities in order to construct intelligent vulnerability assessment or intrusion detection systems.