1. Field of the Invention
The present invention relates to a method for analysis of competing hypotheses. In particular, although not exclusively, the invention provides a method for the analysis of intelligence, wherein intelligence deals with all the things which should be known in advance of initiating a course of action.
2. Discussion of the Background Art
Intelligence analysis is a complicated task that requires a high degree of analytical judgement under conditions of considerable uncertainty. This judgement is used to fill in the gaps in knowledge and is the analyst's principal means of managing uncertainty. Much of intelligence analysis involves judging the relevance and the value of evidence to determine the likelihood of competing hypotheses.
Intelligence is a difficult term to define precisely, yet its role and importance can be both intuitively understood and appreciated. From one perspective it may be seen as an end product, i.e. ‘information’ that is used to enhance or aid a decision-making process. From another perspective, it may refer to the process that is applied to information in order to transform it into a more useful product. Perhaps more important than what intelligence is, however, is what intelligence does. Intelligence, both as a product and a process, is a means by which better decisions can be made, based on an increased understanding of likely courses of action, their influences and their consequences.
In everyday personal affairs few of our decisions use any directed analytical processes, and even fewer require any sort of rigorous approach. Arguably this is due to the relatively minor consequences of the decisions we face in such everyday affairs. The same is not true for large-scale human affairs, such as the business of nations and corporations, where the complexity of the environment and the relative consequences of decisions can have enormous impact on the well-being and survival of a nation's citizenry or a corporation's ability to compete. This distinction in and of itself is cause enough to consider whether human ‘everyday reasoning’ is robust and reliable enough for use in these larger contexts.
As it happens, humans systematically make substantive errors in reasoning due to problems of framing, resistance of mental models to change, risk aversion, limitations of short-term memory, and other cognitive and perceptual biases. This has severe implications for the process of intelligence analysis, and may lead to incorrect conclusions, especially in situations that appear familiar but which actually result in different outcomes; in situations where the gradual assimilation of information into established mental models results in the failure to detect ‘weak signals’ that should have triggered a major re-evaluation; and in situations where the complexity of the mental models is untenable due to the limitations of human short-term memory.
When applied to the business of nation states, the consequences of intelligence failure can be disastrous, so much so that the recorded history of the world, both ancient and modern, is replete with a litany of devastating intelligence failures too numerous to list. Examples of these failures are easily found in any period of history, such as the failure of the United States during World War II to perceive an impending attack on Pearl Harbour and the failure of Japan to reason that Midway Island was a trap, with the consequent sinking of four Japanese aircraft carriers and the loss of all crews, aircrews and aircraft. It is therefore foolhardy to believe that good intelligence can be developed by relying solely on human cognition without resort to products, methodologies or frameworks that attempt to augment human cognition while also mitigating its defects.
Intelligence analysis generally requires that analysts choose from among several alternative hypotheses in order to present the most plausible of these as likely explanations or outcomes for the evidence being analyzed. Analysts who do not use some form of rigorous methodology will often work intuitively to identify what they believe to be the most likely explanation and then work backwards, using a satisficing approach where the ‘correct’ explanation is the first one that is consistent with the evidence. The major downfall of the satisficing approach is that there may be more than one explanation consistent with the evidence, and unless the analyst evaluates every reasonable alternative, they may arrive at an incorrect conclusion.
Other common problems with using this strategy include the failure to generate appropriate alternative hypotheses; the propensity to filter and interpret the evidence to support the conclusions; and the failure to consider the diagnosticity of the evidence and how well it differentiates between hypotheses. The recognition of these problems with their disastrous consequences has led to the development of Alternative Analysis techniques that are widely employed, for example, within the intelligence services, see R. J. Heuer, Psychology of Intelligence Analysis. Washington, D.C.: Central Intelligence Agency Center for the Study of Intelligence, 1999. [Online] Available: http://www.cia.gov/csi/books/19104.
Other strategies that are less commonly used in intelligence analysis and are also ineffective are discussed in detail in A. George, Presidential Decisionmaking in Foreign Policy: The Effective Use of Information and Advice. Boulder, Colo., USA: Westview Press, 1980. Many alternative analysis techniques attempt to address the problems of fixed mind-sets and incomplete generation of alternative hypotheses, while still others attempt to address the problems of reasoning about the alternative hypotheses, such as R. Z. George, Fixing the problem of analytical mind-sets: Alternative analysis, International Journal of Intelligence and Counter Intelligence, vol. 17, no. 3, pp. 385-405, Fall 2004.
One way in which some of the problems of reasoning about alternative hypotheses could be addressed is to require the analyst to simultaneously evaluate all reasonable hypotheses and reach conclusions about their relative likelihood, based on the evidence provided. However, for any non-trivial problem, such simultaneous evaluation is a near-impossible feat for human cognition alone. Recent research suggests that the number of individual variables we can mentally handle while trying to solve a problem is relatively small: four (4) variables are difficult, while five (5) are nearly impossible, according to G. S. Halford, R. Baker, J. E. McCredden, and J. D. Bain, How many variables can humans process? Psychological Science, vol. 16, no. 1, pp. 70-76, January 2005. The Analysis of Competing Hypotheses (ACH), Heuer op. cit., was developed to provide a framework for assisted reasoning that would help overcome these limitations.
Heuer's Analysis of Competing Hypotheses (ACH)
The ACH methodology was developed in the mid-to-late 1970s by Richard Heuer, a former CIA Directorate of Intelligence methodology specialist, in response to his never-ending quest for better analysis. The ACH methodology is still considered to be highly relevant today, see F. J. Stech and C. Elässer, Midway revisited: Deception by analysis of competing hypothesis, MITRE Corporation, Tech. Rep., 2004. [Online] Available: http://www.mitre.org/work/tech_papers/tech_papers_04/stech_deception. ACH typically consists of the following eight steps:
1) Identify the possible hypotheses to be considered, for example by using a group of analysts with different perspectives to brainstorm the possibilities.
2) Make a list of significant evidence and arguments for and against each hypothesis.
3) Prepare a matrix with hypotheses across the top and evidence down the side. Analyze the “diagnosticity” of the evidence and arguments, that is, identify which items are most helpful in judging the relative likelihood of the hypotheses.
4) Refine the matrix. Reconsider the hypotheses and delete evidence and arguments that have no diagnostic value.
5) Draw tentative conclusions about the relative likelihood of each hypothesis. Proceed by trying to disprove the hypotheses rather than prove them.
6) Analyze how sensitive your conclusion is to a few critical items of evidence. Consider the consequences for your analysis if that evidence were wrong, misleading, or subject to a different interpretation.
7) Report conclusions. Discuss the relative likelihood of all the hypotheses, not just the most likely one.
8) Identify milestones for future observation that may indicate events are taking a different course than expected.
These eight steps are intended to provide a basic framework for: identification of assumptions, arguments and hypotheses; consideration of all evidence and hypotheses, including the value of the evidence relative to the hypotheses; a method of disconfirmation for identifying the most likely hypotheses; an approach to reporting the results of the analysis; and an approach to detecting future changes in the outcomes.
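The matrix-building and refinement steps (steps 3 to 5) can be sketched in code. The following is a minimal illustration only: the hypotheses, evidence items and consistency ratings are hypothetical placeholders and do not appear in Heuer's text.

```python
# Minimal sketch of an ACH evidence/hypothesis matrix (steps 3-5).
# All hypotheses, evidence items and ratings are hypothetical
# placeholders chosen purely for illustration.

hypotheses = ["H1: attack imminent", "H2: routine exercise"]

# Each evidence item maps to a consistency rating per hypothesis:
# "C" = consistent, "I" = inconsistent, "N" = neutral/not applicable.
matrix = {
    "troop movements observed":  {"H1: attack imminent": "C", "H2: routine exercise": "C"},
    "no logistics build-up":     {"H1: attack imminent": "I", "H2: routine exercise": "C"},
    "annual exercise scheduled": {"H1: attack imminent": "N", "H2: routine exercise": "C"},
}

def diagnostic(ratings):
    """An item is diagnostic only if it rates the hypotheses differently (step 3)."""
    return len(set(ratings.values())) > 1

# Step 4: refine the matrix by deleting evidence with no diagnostic value.
refined = {e: r for e, r in matrix.items() if diagnostic(r)}

# Step 5: proceed by disconfirmation - count inconsistencies per hypothesis;
# the hypothesis with the fewest inconsistencies is tentatively most likely.
scores = {h: sum(r[h] == "I" for r in refined.values()) for h in hypotheses}
print(scores)  # here H2 accumulates fewer inconsistencies than H1
```

Note how the first evidence item is consistent with both hypotheses and is therefore dropped in step 4: evidence that does not differentiate between hypotheses contributes nothing to the relative-likelihood judgement.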
In simple terms, ACH requires the analyst to simultaneously evaluate all reasonable hypotheses and reach conclusions about their relative likelihood, based on the evidence provided. Heuer acknowledges that while this holistic approach will not always yield the right answer, it does provide some protection against cognitive biases and limitations. Of particular interest is step 5, which requires the analyst to draw tentative conclusions about the likelihood of each hypothesis. It has been argued that ACH recommends analysts consider the likelihood of each hypothesis h given the assertion of each item of evidence e, i.e. p(h|e). However, this can reasonably be interpreted to mean that the negation of each item of evidence, not e, should also be considered, i.e. p(h|not e). Consideration of counterfactuals has the advantage that the model can be constructed independently of known facts and continually re-evaluated if the value of the evidence changes over time.
The difference in interpretation lies in whether the evidence with respect to the hypotheses is considered a priori or a posteriori. Evidence can be constructed a posteriori by the analyst from the ‘facts at hand’, where the evidence has already been measured and valued, rather than from a general examination of the possible signs for each hypothesis. While examination of available data is usually relevant, ‘hidden facts’, i.e. conditions which are not observable, or conditions which have not yet taken place, are also likely to be relevant to the analysis. If reasoning is conducted a priori, then the value of the evidence is uncertain, and the analyst is more likely to consider the consequences of it being false as well as the consequences of it being true. If the reasoning is a posteriori, the analyst may know whether the evidence is true or false, and not consider its counterfactual to be relevant in determining the likelihood of the hypothesis. This is a mistake, since the analysis model will no longer be relevant if the value of the evidence changes, or if there is uncertainty about its value.
Richard Heuer points out that analysts should interpret ‘evidence’ in its broadest sense and not limit themselves to current intelligence reporting. Indeed, ACH is able to model the absence of evidence as well as its presence, and when done diligently this presents no conceptual problem. However, ACH does not require analysts to consider both the assertion and negation of evidence, and this deficiency may lead them to frame the problem in terms of a single view of evidence, which often leads to incorrect conclusions, especially if deception or denial is being undertaken by an adversary.
Analysis of Competing Hypotheses—Counter Deception (ACH-CD)
ACH-CD was developed by Frank Stech and Christopher Elässer of the MITRE Corporation as a modified variant of ACH to account for cognitive factors that make people poor at detecting deception: Stech & Elässer op. cit. The authors correctly argue that the use of ACH can lead to greater susceptibility to deception, especially when reasoning about a single view of evidence, i.e. the likelihood of each hypothesis given the assertion of the evidence, p(h|e). Their argument is that this type of reasoning neglects the base rates of both the evidence br(e) and the hypothesis br(h), which can result in reasoning errors that lead to incorrect conclusions, see K. Burns, Mental Models and Normal Errors. Lawrence Erlbaum Associates, 2004, ch. How Professionals Make Decisions. [Online] Available: http://mentalmodels.mitre.org/Contents/NDM5_Chapter.pdf. More correctly, it should be said that reasoning using only one of the logical conditionals (usually the positive conditional, p(h|e)) is more likely to produce reasoning flaws than when both are considered. Stech and Elässer make the same point when they argue that analysts' judgements are more susceptible to deception if they do not also take the false positive rate of the evidence into account. An example of this susceptibility to deception provided by Stech and Elässer is how reasoning about the detection of Krypton gas in a middle-eastern country can lead to the erroneous conclusion that the country in question likely has a nuclear enrichment program. For clarity, their example has been reproduced below:
Detect Krypton → p(enrichment|Krypton)=high → p(enrichment program)=high → p(nuclear program)=high
They argue that the main problem with this reasoning is that it does not consider that Krypton gas is also used to test pipelines for leaks, and that, being a middle-eastern country with oil pipelines, the probability of the gas being used outside of a nuclear program is also fairly high, i.e. p(Krypton|not enrichment)=medium to high. This additional information should lead the analyst to the conclusion that there is a fair amount of uncertainty about a nuclear program given the detection of Krypton. The assignment of the ‘high’ value to p(enrichment|Krypton) neglects the fact that an oil-rich middle-eastern country is likely to use Krypton gas, regardless of whether it has a nuclear program.
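The effect of this base-rate neglect can be made concrete with a small Bayes computation. Note that all numeric values below are assumed purely for illustration; Stech and Elässer's example is qualitative and supplies no figures.

```python
# Illustrative Bayes computation for the Krypton example. Every numeric
# value here is an assumption made for the sake of the sketch.

br_h = 0.05            # assumed base rate of an enrichment program
p_e_given_h = 0.9      # p(Krypton | enrichment): assumed high
p_e_given_not_h = 0.6  # p(Krypton | not enrichment): medium-to-high, since
                       # Krypton is also used to leak-test oil pipelines

# Bayes' theorem:
# p(h|e) = p(e|h)br(h) / [p(e|h)br(h) + p(e|not h)(1 - br(h))]
posterior = (p_e_given_h * br_h) / (
    p_e_given_h * br_h + p_e_given_not_h * (1 - br_h))

print(round(posterior, 3))  # 0.073: far from 'high', despite p(e|h) being high
```

Even with these generous assumptions, the posterior probability of an enrichment program is well under ten percent, illustrating why p(enrichment|Krypton) cannot be assigned a ‘high’ value once the false positive rate and the base rate are taken into account.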
However, it can be argued that Stech and Elässer have interpreted Step 5 of ACH more narrowly than perhaps was intended. Heuer makes no claim about which of p(h|e) or p(e|h), and their corresponding counterfactuals p(h|not e) and p(e|not h), should be used. Heuer describes the process in such general terms as to be consistent with either interpretation, although consideration of counterfactuals is essential if basic reasoning errors are to be avoided. In any event, it can be shown that p(h|e) and p(h|not e) can be derived from knowledge of p(e|h), p(e|not h) and the base rate of the hypothesis br(h). Therefore the choice of which logical conditionals to use is less important than the soundness of the belief values assigned to them. The choice of logical conditionals becomes more important when the analyst considers whether the evidence is causal in nature with respect to the hypotheses, or is merely derivative. The problem of framing with respect to the causal or derivative nature of evidence and the implications for reasoning is discussed further below.
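The derivation referred to above can be sketched as a short computation: both p(h|e) and its counterfactual p(h|not e) follow from Bayes' theorem given p(e|h), p(e|not h) and br(h). The function name and the numeric inputs below are illustrative only.

```python
# Sketch: deriving both logical conditionals from the likelihoods and the
# hypothesis base rate via Bayes' theorem. Inputs are arbitrary examples.

def conditionals(p_e_given_h, p_e_given_not_h, br_h):
    """Return (p(h|e), p(h|not e)) given p(e|h), p(e|not h) and br(h)."""
    # Marginal probability of observing the evidence.
    p_e = p_e_given_h * br_h + p_e_given_not_h * (1 - br_h)
    p_h_given_e = p_e_given_h * br_h / p_e
    # Counterfactual: replace each likelihood with its complement.
    p_h_given_not_e = (1 - p_e_given_h) * br_h / (1 - p_e)
    return p_h_given_e, p_h_given_not_e

# Example with arbitrary values: when the evidence is informative, the
# two posteriors bracket the base rate from above and below.
p1, p0 = conditionals(0.8, 0.2, 0.3)
print(p1, p0)  # p0 < br(h) < p1
```

This is why the choice of conditionals matters less than the soundness of the belief values: either pair determines the other, provided the base rate is also supplied.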