In recent years, many solutions have been proposed for problems from diverse fields using neural networks. Dynamical stability has not been a central concern in some neural network applications, such as classification, pattern recognition, combinatorial optimization, system identification, prediction, and the like. In these applications, a simple gradient method driven by a non-increasing energy function of the weights is widely used to learn the network weights. It is well known in gradient-based optimization that the performance of the update rules varies with the error surface to be minimized. Depending upon the initialization, the weights may become stuck in local minima or exhibit oscillatory behavior. Slow convergence is therefore typical of this method, which rarely causes problems in the applications identified above. However, such a method is not suitable for on-line feedback control applications.
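The sensitivity to initialization described above can be made concrete with a minimal Python sketch (the energy function, step size, and starting points here are invented purely for illustration): plain gradient descent on a non-convex energy function reaches a different local minimum depending on the initial weight.

```python
# Illustrative sketch: plain gradient descent, w <- w - lr * dE/dw,
# on a non-convex energy E(w) = w^4 - 3w^2 + w with two local minima.

def gradient_descent(grad, w0, lr=0.01, steps=200):
    """Run the simple gradient update rule from initial weight w0."""
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)
    return w

grad_E = lambda w: 4 * w**3 - 6 * w + 1   # dE/dw for the energy above

# Two different initializations settle into two different local minima.
w_left = gradient_descent(grad_E, w0=-2.0)   # converges near w = -1.30
w_right = gradient_descent(grad_E, w0=2.0)   # converges near w = +1.13
```

Both runs stop at a stationary point (gradient near zero), but at different minima, which is exactly why initialization and slow convergence are tolerable off-line yet problematic in on-line feedback control.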
In early neural network feedback control applications for dynamical systems, one of the most common neural network structures trained by gradient descent, known as the back propagation network, was used without concern for stability. K. S. Narendra and K. Parthasarathy, "Identification and Control of Dynamical Systems Using Neural Networks," IEEE Trans. Neural Networks, vol. 1, pp. 4-27, 1990, showed that the convergence of such dynamical back propagation networks is inherently slow, and that stability of such networks could not be proven.
Off-line trained neural networks are generally not suitable for feedback control systems. Even after an exhaustive training phase, if the state vector, for some reason, goes outside of the compact region on which the network was trained, further time-consuming retraining may be necessary. This situation is of little consequence for some open-loop applications (such as those mentioned above), but it may easily cause instability in dynamical feedback control systems. Thus, it is important to design a neural network controller which can learn on-line and adapt quickly to a changing environment, while preserving control stability.
There exist several on-line neural network controllers that are provably stable using a one-layer neural network (also known as a linear-in-the-parameters neural network). When linearity in the parameters holds, the rigorous results of adaptive control become applicable to the neural network weight tuning and yield a stable closed-loop system. However, the same is not true for a multilayer neural network, in which the unknown parameters pass through a nonlinear activation function. Such a multilayer neural network not only offers a more general case than the one-layer neural network, permitting applications to a much larger class of control systems, but also avoids certain limitations, such as defining a basis function set or choosing the centers and variances of radial basis-type activation functions.
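The distinction between the two network classes can be sketched with a toy example (the scalar networks, basis functions, and numbers below are illustrative assumptions, not any particular controller): a one-layer network is linear in its unknown weights W, while in a two-layer network the first-layer weights V sit inside the activation nonlinearity.

```python
import math

def one_layer(W, x):
    """Linear-in-the-parameters net: y = sum_i W_i * phi_i(x).
    The basis phi is fixed, so y is linear in the unknown weights W."""
    phi = [x, x**2, math.tanh(x)]          # fixed, pre-chosen basis set
    return sum(w * p for w, p in zip(W, phi))

def two_layer(W, V, x):
    """Multilayer net: y = sum_i W_i * tanh(V_i * x).
    The unknown first-layer weights V pass through the nonlinearity,
    so y is NOT linear in V."""
    return sum(w * math.tanh(v * x) for w, v in zip(W, V))

W, V, x = [0.5, -1.0, 2.0], [1.0, -0.7, 0.3], 0.8

# Scaling the unknown parameters: the output scales linearly in W,
# but not in V, because V is wrapped in tanh.
linear_in_W = one_layer([2 * w for w in W], x) - 2 * one_layer(W, x)
nonlinear_in_V = two_layer(W, [2 * v for v in V], x) - 2 * two_layer(W, V, x)
```

The first difference is (numerically) zero; the second is not. It is precisely this linearity in W that lets classical adaptive-control proofs carry over to one-layer network tuning, and its absence for V that makes the multilayer case harder.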
One proposed method of controlling a large class of nonlinear systems is to map the nonlinear systems to linear ones. Controlling nonlinear systems by "feedback linearization" presently centers on geometric techniques. However, the applicability of these techniques is quite limited because they rely on exact knowledge of the nonlinearities. In order to relax some of the exact model-matching restrictions, several adaptive schemes have been introduced that tolerate some linear parametric uncertainties. See, for example, G. Campion and G. Bastin, "Indirect Adaptive State Feedback Control of Linearly Parameterized Nonlinear Systems," Int. J. Adaptive Control Signal Proc., vol. 4 (1990); D. G. Taylor et al., "Adaptive Regulation of Nonlinear Systems with Unmodeled Dynamics," IEEE Trans. Automat. Control, vol. 34, pp. 405-412 (1989); R. Marino and P. Tomei, "Adaptive Output-Feedback Control of Nonlinear Systems, Part II: Nonlinear Parameterization," IEEE Trans. Automat. Control, vol. 38 (1993).
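Feedback linearization itself can be illustrated with a minimal sketch for a scalar plant (the particular f and g below are invented for illustration and, as noted above, are assumed to be known exactly, which is the very assumption these techniques rely on): the control input algebraically cancels the nonlinearities, so the closed loop behaves as a linear system.

```python
import math

# Illustrative scalar plant: xdot = f(x) + g(x) * u, with f and g known.
f = lambda x: -x**3 + math.sin(x)
g = lambda x: 2.0 + math.cos(x)   # stays in [1, 3], bounded away from zero

def fbl_control(x, v):
    """Feedback-linearizing law u = (v - f(x)) / g(x).
    Substituting into the plant gives xdot = v, a linear system."""
    return (v - f(x)) / g(x)

# Euler simulation: choosing v = -x makes the closed loop xdot = -x,
# so the state decays exponentially regardless of the plant nonlinearities.
x, dt = 1.5, 0.001
for _ in range(5000):             # simulate 5 seconds
    v = -x
    u = fbl_control(x, v)
    x += dt * (f(x) + g(x) * u)
```

After 5 simulated seconds the state has decayed to roughly 1.5·e⁻⁵ ≈ 0.01, exactly as the linearized dynamics predict; with any mismatch in f or g, this cancellation, and the guarantee, is lost.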
Unlike some open-loop neural network applications, in feedback control systems it must be shown not only that the neural network weights are bounded, but also that the inputs, outputs and states remain bounded. A general control structure for feedback linearization can be given by

u = N(x, Θ)/D(x, Θ),

where N(x, Θ) and D(x, Θ) denote the numerator and denominator parts of the controller, parameterized by Θ.
When any adaptive scheme is employed to compute the denominator part of the controller, D(x, Θ), then D must be bounded away from zero for all time. A controller with this property will be referred to as a well-defined controller. This feedback linearization problem is far from trivial, and as a result, existing solutions to the control problem are usually given only locally and/or assume additional prior knowledge about the system. The same difficulties appear in neural network control systems, which can be categorized as nonlinear adaptive systems.
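One common safeguard for keeping such a controller well defined, sketched here purely for illustration (the clipping threshold and function names are assumptions, not the method of any cited work), is to bound the adaptive denominator estimate away from zero before dividing:

```python
def well_defined_D(D_hat, D_min=0.1):
    """Clip the estimated denominator so |D| >= D_min > 0 for all time,
    preserving its sign. This keeps u = N/D finite even while the
    adaptive estimate D_hat is transiently near zero."""
    if abs(D_hat) < D_min:
        return D_min if D_hat >= 0 else -D_min
    return D_hat

def control(N_hat, D_hat):
    """Well-defined feedback-linearizing control u = N / D."""
    return N_hat / well_defined_D(D_hat)

# Even if the adaptive estimate D_hat drifts through zero,
# the applied control stays bounded instead of blowing up.
u = control(N_hat=1.0, D_hat=1e-9)   # bounded by N_hat / D_min
```

Simple clipping like this keeps the division safe but is a crude device; the locality and prior-knowledge assumptions mentioned above stem largely from the difficulty of guaranteeing such a bound rigorously within the stability proof.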