With the increased use of computing networks, such as the Internet, users are inundated with the amount of structured and unstructured information available from various sources. Information gaps abound as users search for information on various subjects and try to piece together what they find and what they believe to be relevant. To assist with such searches, knowledge management systems have been developed that take an input, analyze it, and return results indicative of the most probable answers to the input. These question answering (QA) systems provide automated mechanisms for searching through a knowledge base containing numerous sources of content, e.g., electronic documents, and analyzing them to determine a result and a confidence measure of how accurate the result is in relation to the input.
QA systems are built on technology for hypothesis generation, massive evidence gathering, analysis, and scoring. A QA system takes an input question, analyzes it, and decomposes it into constituent parts; generates one or more hypotheses based on both the decomposed question and the results of a primary search of answer sources; performs hypothesis and evidence scoring based on evidence retrieved from evidence sources; performs synthesis of the one or more hypotheses; and, based on trained models, performs a final merging and ranking to output an answer to the input question along with a confidence measure.
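The pipeline described above can be sketched in simplified form. This is a minimal illustration only, not an actual QA system implementation: the function names, the keyword-overlap search, and the passage-counting evidence score are all assumptions standing in for the far more sophisticated techniques a real system would use.

```python
# Illustrative sketch of the QA pipeline stages: question decomposition,
# hypothesis generation from a primary search, evidence scoring, and
# final merging/ranking with a confidence measure. All logic here is a
# hypothetical simplification, not a real QA system's method.

def decompose(question):
    # Decompose the input question into constituent parts
    # (a naive lowercase word split stands in for real analysis).
    return question.lower().rstrip("?").split()

def generate_hypotheses(parts, knowledge_base):
    # Primary search of answer sources: propose candidate answers
    # whose associated text mentions any part of the question.
    return [answer for answer, text in knowledge_base.items()
            if any(p in text for p in parts)]

def score_hypothesis(hypothesis, parts, evidence_sources):
    # Evidence scoring: count passages that mention the hypothesis
    # together with at least one question term.
    score = 0
    for passage in evidence_sources:
        if hypothesis in passage and any(p in passage for p in parts):
            score += 1
    return score

def answer(question, knowledge_base, evidence_sources):
    parts = decompose(question)
    hypotheses = generate_hypotheses(parts, knowledge_base)
    scored = [(score_hypothesis(h, parts, evidence_sources), h)
              for h in hypotheses]
    if not scored:
        return None, 0.0
    # Final merging and ranking: return the best-scored hypothesis
    # and a simple normalized confidence measure.
    total = sum(s for s, _ in scored) or 1
    best_score, best = max(scored)
    return best, best_score / total
```

For example, with a toy knowledge base mapping candidate answers to descriptive text and a handful of evidence passages, `answer("What is the capital of France?", kb, evidence)` returns the best-supported candidate together with its confidence.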
One challenge for QA systems is answer stability. While a QA system provides answers to questions, it indicates neither the stability of an answer nor when that answer might change. Presently, answer stability can be determined only by repeatedly asking the QA system the same question and monitoring for changes in the answers over time, which is time and resource intensive.
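The brute-force monitoring approach described above can be sketched as follows. The `qa_system` callable and the stability metric (fraction of consecutive polls whose answers agree) are hypothetical assumptions introduced for illustration; the passage describes repeated querying only, not any particular metric.

```python
# Hedged sketch of brute-force answer-stability monitoring: ask the
# same question repeatedly and record how often the answer changes.
# `qa_system` is assumed to be a callable taking a question string
# and returning an answer string.
import time

def monitor_stability(qa_system, question, polls=5, interval=0.0):
    history = []
    changes = 0
    for _ in range(polls):
        current = qa_system(question)
        # Count a change whenever the answer differs from the last poll.
        if history and current != history[-1]:
            changes += 1
        history.append(current)
        if interval:
            time.sleep(interval)  # wait between polls (resource cost)
    # Stability: fraction of consecutive poll pairs that agreed.
    stability = 1.0 - changes / max(polls - 1, 1)
    return history, stability
```

This makes the cost concrete: every data point requires a full round trip through the QA system, which is exactly the time- and resource-intensive behavior the passage describes.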