Bayesian Networks (BNs) have been widely used in software applications to make decisions by electronically modeling decision processes. Example applications particularly well suited to BNs include artificial intelligence, speech recognition, visual tracking, pattern recognition, and the like. A BN is often viewed as a foundation for probabilistic computing.
A BN is based on Bayesian logic, which is generally applied to automated decision making and to inferential statistics dealing with probability inference. A static BN differs from a Dynamic BN (DBN) in that a DBN can adjust itself over time for stochastic (probabilistic) variables. However, some decision processes that are not likely to evolve over time are better suited to a BN implementation. Moreover, a DBN includes additional data structures and processing that may make some decision processes impossible to model efficiently within a DBN. Accordingly, depending on the decision process being modeled, a BN may be preferred over a DBN. Both BN and DBN applications model a decision-making process as a decision tree, where each node of that tree identifies a particular decision state, and each decision state (node) can itself be a tree data structure.
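The modeling idea above can be sketched in a few lines. The following is a minimal, illustrative example only: the node names ("Rain", "Sprinkler", "WetGrass") and all probability values are assumptions chosen for clarity, not part of the original description. Each node stores its parent set and a conditional probability table (CPT), and the joint probability of a full assignment is the product of each node's conditional probability given its parents.

```python
# Minimal sketch of a BN as a directed graph of decision states.
# Structure and all probability values are illustrative assumptions.
# Each node: its parents plus a CPT keyed by the tuple of parent states.
network = {
    "Rain":      {"parents": (), "cpt": {(): {True: 0.2, False: 0.8}}},
    "Sprinkler": {"parents": ("Rain",),
                  "cpt": {(True,):  {True: 0.01, False: 0.99},
                          (False,): {True: 0.40, False: 0.60}}},
    "WetGrass":  {"parents": ("Rain", "Sprinkler"),
                  "cpt": {(True, True):   {True: 0.99, False: 0.01},
                          (True, False):  {True: 0.80, False: 0.20},
                          (False, True):  {True: 0.90, False: 0.10},
                          (False, False): {True: 0.00, False: 1.00}}},
}

def joint_probability(net, assignment):
    """P(assignment) = product over nodes of P(node | its parents)."""
    p = 1.0
    for name, node in net.items():
        parent_states = tuple(assignment[q] for q in node["parents"])
        p *= node["cpt"][parent_states][assignment[name]]
    return p

# 0.2 * 0.99 * 0.80 = 0.1584, up to floating-point rounding.
print(joint_probability(network,
                        {"Rain": True, "Sprinkler": False, "WetGrass": True}))
```

The factored CPT representation is what distinguishes a BN from a flat joint-probability table: each node only records probabilities conditioned on its direct parents.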
A Most Probable Explanation (MPE) is a decision-state sequence (path) through a BN for a given problem having observed outputs (evidences). In other words, if a result is known, the MPE is the set of states of all hidden nodes in the BN that most likely explains the observed evidence. Once a BN has been trained or used for a few different decisions, it can be used to produce its own decision based on observable evidence for a given problem, or to evaluate different observations.
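On a small network, an MPE can be found by brute force: enumerate every assignment of the hidden nodes and keep the one with the highest joint probability given the evidence. The sketch below assumes a two-node network (a hidden "Cause" and an observed "Effect") with illustrative probabilities; none of these names or values come from the original text.

```python
# Brute-force MPE sketch on a two-node network: hidden Cause -> observed Effect.
# All probability values are illustrative assumptions.
p_cause = {True: 0.3, False: 0.7}                 # prior P(Cause)
p_effect = {True:  {True: 0.9, False: 0.1},       # P(Effect | Cause=True)
            False: {True: 0.2, False: 0.8}}       # P(Effect | Cause=False)

def mpe(observed_effect):
    """Return the hidden state that best explains the observed evidence,
    together with the joint probability it achieves."""
    best_state, best_p = None, -1.0
    for cause in (True, False):
        p = p_cause[cause] * p_effect[cause][observed_effect]
        if p > best_p:
            best_state, best_p = cause, p
    return best_state, best_p

# Effect observed True: Cause=True scores 0.3*0.9=0.27,
# Cause=False scores 0.7*0.2=0.14, so Cause=True is the MPE.
print(mpe(True))
```

Enumeration is exponential in the number of hidden nodes, which is exactly why practical systems use the junction-tree passes described next rather than brute force.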
Conventional MPE-generating algorithms require two complete passes over the BN. In the first pass, all cliques' potentials are assigned, and evidence is collected from the leaves of the junction tree, which is derived from the BN, to the root of the junction tree. The maximum element of the root clique is then selected and stored, and all other elements in the root clique are set to zero. During the second pass (referred to as the distribute-evidence processing step), the junction tree is iterated from its root potential back to all the leaf cliques. During this second pass, evidence is redistributed based on each state (clique) of the junction tree being evaluated, and selective maximum potentials for each state are retained, such that when the second pass is completed, an MPE is produced. As is apparent, conventional techniques for generating an MPE are processor and memory intensive.
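The two-pass scheme can be sketched on the simplest junction-tree shape, a chain of cliques. This is a hedged illustration, not the original algorithm in full generality: `unary` and `pairwise` are hypothetical potential tables, and a chain reduces the collect/distribute passes to a Viterbi-style forward pass with back-pointers followed by a backward read-out.

```python
def chain_mpe(unary, pairwise):
    """Two-pass max-product on a chain-shaped junction tree (a sketch).
    unary[i][s]: potential of state s at clique i.
    pairwise[i][s][t]: potential linking state s at clique i to state t at i+1.
    """
    n = len(unary)
    # Pass 1 (collect evidence): propagate max-messages leaf -> root,
    # recording which predecessor state achieved each maximum.
    msg = [list(unary[0])]
    back = []
    for i in range(1, n):
        prev, cur, ptr = msg[-1], [], []
        for t in range(len(unary[i])):
            scores = [prev[s] * pairwise[i - 1][s][t] for s in range(len(prev))]
            best = max(range(len(scores)), key=scores.__getitem__)
            cur.append(scores[best] * unary[i][t])
            ptr.append(best)
        msg.append(cur)
        back.append(ptr)
    # Select and store the maximum element of the root clique.
    root = max(range(len(msg[-1])), key=msg[-1].__getitem__)
    # Pass 2 (distribute evidence): walk root -> leaf via the back-pointers,
    # retaining the maximizing state at each clique; the result is the MPE.
    path = [root]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    path.reverse()
    return path, msg[-1][root]

path, value = chain_mpe([[0.6, 0.4], [0.5, 0.5]],
                        [[[0.9, 0.1], [0.2, 0.8]]])
print(path, value)  # state sequence and its maximum potential
```

Even in this stripped-down form, both complete passes and the per-clique potential tables are visible, which hints at why the conventional approach is processor and memory intensive on large networks.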
Therefore, there is a need for improved MPE generation.