Process models serve to approximate, analyze, and optimize industrial processes. Such processes or process steps may be performed by an entire production line, by a few system components cooperating as an aggregate, or even by individual system components of a production line. The basic function of such a process model is to provide, on the basis of input parameters, output values that can be expected or predicted for those input parameters. The output values may be used in a closed-loop or feedback control for influencing or fully controlling the industrial process or production.
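As a schematic illustration of this role, the following sketch shows a process model supplying predicted output values to a simple feedback loop that adjusts an input parameter toward a setpoint. The model, the proportional controller, and all names are hypothetical assumptions, not taken from the cited art:

```python
def process_model(x):
    """Hypothetical process model: maps an input parameter to a
    predicted output value (here a simple non-linear response)."""
    return 2.0 * x + 0.5 * x ** 2

def control_loop(setpoint, x=0.0, gain=0.05, steps=200):
    """Proportional feedback on the model's prediction error."""
    for _ in range(steps):
        y = process_model(x)        # predicted output for current input
        x += gain * (setpoint - y)  # adjust input to reduce the error
    return x, process_model(x)

x_final, y_final = control_loop(setpoint=10.0)
print(x_final, y_final)  # input settles so the predicted output tracks the setpoint
```

In such a loop the quality of the control depends directly on how faithfully the model reproduces the real process response.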
Conventional process models are based on a linear formulation and are usually analyzed by means of known statistical methods. However, conventional linear formulations cannot satisfactorily model complex processes having non-linear characteristics. Thus, for modeling non-linear processes it is customary to use non-linear models, such as neural networks, which are capable of mapping or representing complex non-linear response characteristics.
In order to identify the parameters of such non-linear models it is necessary to use non-linear optimizing algorithms, which require an extremely high computational effort and expense, particularly during the so-called learning phase. A further particular difficulty is encountered in forming the neural network structure, namely in selecting the number of individual neural cells to be incorporated into the network and in selecting the interconnections of these neural cells within the network.
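The cost of this learning phase can be illustrated with a minimal sketch: even identifying the two parameters of a single Gaussian unit by non-linear optimization requires many iterative gradient steps, whereas a linear model's parameters would follow from a single least-squares solve. The toy data and all names are illustrative assumptions:

```python
import numpy as np

# Toy data generated by a single Gaussian unit with weight 1.5, centre 0.3.
x = np.linspace(-2.0, 2.0, 40)
y = 1.5 * np.exp(-(x - 0.3) ** 2)

w, c = 1.0, 0.0  # weight and centre to be identified
lr = 0.05
for _ in range(2000):                # many iterations: the "learning phase"
    g = np.exp(-(x - c) ** 2)        # unit activation
    r = w * g - y                    # residual of the model output
    # Gradient descent on the mean squared residual with respect to w and c.
    w -= lr * np.mean(2 * r * g)
    c -= lr * np.mean(2 * r * w * g * 2 * (x - c))

print(w, c)  # parameters approach the generating values
```

The example identifies only two parameters; for a full network with many cells and interconnections the same iterative search must run over a far larger, non-convex parameter space, which is the source of the effort noted above.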
For further background information reference is made to the following publications, the content of which is incorporated herein by reference.
(A) John Moody and Christian J. Darken, "Fast Learning in Networks of Locally Tuned Processing Units", Neural Computation, Vol. 1, pages 281 to 294, MIT Press, 1989; with regard to the "Radial Basis Functions Method" applying Gauss functions;
(B) Mark J. L. Orr, "Regularization in the Selection of Radial Basis Function Centres", Neural Computation, Vol. 7, No. 3, pages 606 to 623, MIT Press, 1995; with regard to the "Stepwise Regression Method";
(C) S. Chen, C. F. N. Cowan, and P. M. Grant, "Orthogonal Least Squares Learning Algorithm for Radial Basis Function Networks", IEEE Transactions on Neural Networks, Vol. 2, No. 2, pages 302 to 309, March 1991; with regard to the "Forward Selection Method";
(D) G. Deco and D. Obradovic, "An Information-Theoretic Approach to Neural Computing", Springer Verlag, 1996; with regard to the variation of the selection criterion as an estimate of the expected generalization error.