Given a hypothesis to evaluate, Bayesian inference enables reasoning about the hypothesis by combining prior knowledge with the results of a set of tests on the hypothesis. In Bayesian inference, a hypothesis is a proposition whose truth or falsity is used to determine an answer to a problem or problem set. By iteratively running tests and applying each test result to the prior knowledge, an answer to the hypothesis, in the form of a probability of truth or falsity, can be determined. For a hypothesis A and evidence B, the probability of A given B is:
P(A|B) = P(B|A) P(A) / P(B)

where P(A) is the initial degree of belief in A, P(A|B) is the degree of belief in A having accounted for B, and P(B|A)/P(B) is the support B provides for A. P(A) is commonly known as the prior, and P(A|B) is commonly known as the posterior.
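The iterative update described above can be sketched in a few lines of Python. This is a minimal illustration, not from the source: the function name `bayes_update` and the example likelihoods (0.9 and 0.2) are assumptions chosen for demonstration. Each posterior becomes the prior for the next test.

```python
def bayes_update(prior, p_b_given_a, p_b_given_not_a):
    """Return the posterior P(A|B) from the prior P(A) and the test likelihoods."""
    # P(B) by the law of total probability.
    p_b = p_b_given_a * prior + p_b_given_not_a * (1 - prior)
    # Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B).
    return p_b_given_a * prior / p_b

# Iteratively apply test results: each posterior becomes the next prior.
belief = 0.5  # initial degree of belief in the hypothesis
for _ in range(3):  # three tests, each yielding the same positive evidence
    belief = bayes_update(belief, p_b_given_a=0.9, p_b_given_not_a=0.2)
print(belief)  # the degree of belief rises well above the 0.5 prior
```

After three updates the belief exceeds 0.98, showing how repeated evidence accumulates toward an answer.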
It is possible to calculate P(A|B) by running every available test for a hypothesis or hypotheses; in practice, however, running every test is too time- and resource-intensive. One process commonly used to determine an answer to a hypothesis or hypotheses using a subset of the available tests is generalized binary search (GBS). In GBS, tests are greedily selected to maximize the value of the information obtained by running them, as measured by the information gain criterion. Information gain is a formal measure of the utility of the information obtained from a test. In this way, the likelihood of a hypothesis or hypotheses can be determined without running every test.
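The greedy selection step can be sketched as follows. This is an illustrative example under simplifying assumptions not stated in the source: hypotheses are equally likely, each test deterministically produces a binary outcome per hypothesis, and the names (`information_gain`, `t_split_even`, etc.) are hypothetical. Information gain is computed as the expected reduction in entropy over the hypotheses.

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def information_gain(prior, test):
    """Expected entropy reduction from running `test`.

    `prior` maps hypothesis -> probability; `test` maps hypothesis -> its
    (deterministic, binary) outcome under that test.
    """
    h_before = entropy(prior.values())
    # Group hypothesis probabilities by the outcome the test would produce.
    outcomes = {}
    for h, p in prior.items():
        outcomes.setdefault(test[h], []).append(p)
    # Expected entropy after observing the test outcome.
    h_after = 0.0
    for probs in outcomes.values():
        mass = sum(probs)
        h_after += mass * entropy([p / mass for p in probs])
    return h_before - h_after

# Four equally likely hypotheses; each test maps a hypothesis to a 0/1 outcome.
prior = {"h1": 0.25, "h2": 0.25, "h3": 0.25, "h4": 0.25}
tests = {
    "t_split_even": {"h1": 0, "h2": 0, "h3": 1, "h4": 1},    # splits 2/2
    "t_split_uneven": {"h1": 0, "h2": 1, "h3": 1, "h4": 1},  # splits 1/3
}
# GBS greedily picks the test with the highest information gain.
best = max(tests, key=lambda t: information_gain(prior, tests[t]))
print(best)  # the even 2/2 split yields the larger gain
```

The even split gains a full bit of information while the uneven split gains less, so GBS selects it first, which is why GBS behaves like binary search when tests bisect the hypothesis space.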