Steam and gas turbines are complex technical systems that are equipped with a multiplicity of sensors (e.g. several hundred), each of which may provide a plurality of measured values per second.
For the purpose of monitoring and controlling the turbine, the data obtained from the sensors are processed, analyzed and interpreted. It is thus possible to identify deviations from a prescribed normal state as early as possible and if need be to prevent damage to and/or failure of the turbine.
In this case, the volume of data and the complexity of the possible dependencies between the data are usually far too great for a human operator to be able to analyze the data effectively.
Model-based information interpretation (and its application within the framework of model-based diagnosis) is becoming increasingly important. In this context, model-based methods have the advantage of an explicit and comprehensible description of the domain (e.g. of the technical system requiring a diagnosis). Such an explicit model can be examined and understood, which promotes acceptance by the user, particularly in respect of a diagnosis or an interpretation result. In addition, the models can be customized for new machines, extended with new domain knowledge and, depending on the type of representation, even checked for correctness with reasonable effort. It is also possible to use the vocabulary of the model for man-machine interaction and hence for implementing an interactive interpretation process.
In the case of a logic-based representation of the domain model, the interpretation process is frequently implemented by means of what is known as (logic-based) abduction. This is an attempt to explain the observed information (such as sensor measurements and results from preprocessing processes) by using a formal model. In this context, allowance is made for the fact that the set of observations is often incomplete (e.g. owing to measurement inaccuracies, the absence of sensors, etc.) by permitting missing information to be assumed during the explanatory process. In formal terms, the object is thus to determine, for a given model T (also called the “theory”) and a set of observations O, a set A of assumptions (usually as a subset A ⊆ 𝒜 of the set 𝒜 of all possible assumptions) such that the observations O are explained by the model T together with the assumptions A, i.e. T ∪ A ⊨ O. In this case, the problem is formulated as an optimization problem, i.e. the “best” such set A ⊆ 𝒜 of assumptions is sought (according to an optimality criterion, e.g. the smallest set, or the set with the lowest weight).
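The optimization problem described above can be sketched for a small propositional Horn theory: a brute-force search enumerates candidate assumption sets in order of increasing size and returns the first (hence cardinality-minimal) set A whose union with the theory T entails all observations O. The rule and hypothesis names below are purely illustrative, and real abduction engines use far more efficient procedures; this is a minimal sketch of the formal task, not an implementation of any particular system.

```python
from itertools import combinations

def forward_chain(rules, facts):
    """Compute the closure of `facts` under Horn rules of the form
    (body, head): if every atom in `body` is derived, derive `head`."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in derived and set(body) <= derived:
                derived.add(head)
                changed = True
    return derived

def abduce(theory, hypotheses, observations):
    """Return a smallest set A of hypotheses such that theory ∪ A
    entails every observation, or None if no subset of `hypotheses`
    explains the observations (the unexplainable-observation case)."""
    hyps = sorted(hypotheses)
    # Enumerating by increasing subset size realizes the optimality
    # criterion "smallest set" from the text.
    for k in range(len(hyps) + 1):
        for cand in combinations(hyps, k):
            if set(observations) <= forward_chain(theory, cand):
                return set(cand)
    return None

# Illustrative toy theory: two hypothetical turbine faults and their
# observable consequences.
theory = [
    (("bearing_worn",), "vibration_high"),
    (("bearing_worn",), "temp_high"),
    (("valve_stuck",), "pressure_low"),
]
hypotheses = {"bearing_worn", "valve_stuck"}
observations = {"vibration_high", "temp_high"}

print(abduce(theory, hypotheses, observations))  # {'bearing_worn'}
```

A weighted variant would instead rank candidate sets by the sum of per-assumption weights, corresponding to the “lowest weight” criterion mentioned above.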
In the practice of automatic information interpretation and/or diagnosis, there is, besides the problem of missing observations, also the problem that observations exist that cannot be explained with the given model. Typical causes of this are, by way of example, faulty sensors that deliver measured values outside an envisaged range, or else incomplete models that do not take account of at least one arising combination of observations. Such problems clearly restrict the practical usability of abduction-based information interpretation.