Observation of a physical system for the purpose of understanding its behavior requires access to key parameters (state variables) within the system. Often these state variables are not directly available for observation, so they must be inferred from indirect and noisy measurements. Optimal linear estimation theory has been developed for estimating these state variables by producing a minimal-error estimate from such contaminated measurements. The need for optimal estimation technology is widespread and includes such diverse applications as monitoring system behavior in hostile environments, estimating system parameters for system modeling, estimating (detecting) messages in communication systems, remote measuring systems, and control of physical systems.
Optimal estimation techniques are based on statistical signal processing (filtering) techniques for extracting a statistical estimate of the desired state variables from inaccurate and distorted measurements by minimizing a prescribed error function. The form of the error function determines the nature of the estimator optimality.
Kalman filtering (Kalman, R. E., "A New Approach to Linear Filtering and Prediction Problems", Trans. ASME, J. Basic Eng., Vol. 82D, pp. 34-45, March 1960) is an optimal filtering technique commonly used for estimating the state variables of a linear system. Kalman filtering is a time-domain operation suitable for estimating the state variables of linear time-varying systems that can be described by a set of linear differential equations with time-varying coefficients (linear differential equations with constant coefficients being a special case). Although Kalman filtering has found application in state estimation for many systems that may be approximately described as linear, the basic Kalman filter technique cannot adequately accommodate general nonlinear systems. Because nonlinear systems are common, attempts have been made to adapt the Kalman filter to estimation of states in nonlinear systems by quasi-linearization techniques. These adaptations, when restricted to computationally feasible methods, result in sub-optimal estimators that do not yield a minimal (optimal) error estimate.
Because it is desirable to use digital computers for applying Kalman filter techniques, discrete-time (time-sampled) adaptations have been developed. Discrete Kalman filtering is ideally suited for estimation of states in discrete-time systems that can be properly described by a set of finite difference equations with discrete time-varying coefficients. However, because of the strong incentives for incorporating digital signal processing techniques for continuous-time Kalman filter estimation, extensive use has been made of discrete Kalman filters in continuous-time system state estimation.
Because Kalman filters (and other state variable estimators, such as the Luenberger observer (Luenberger, D. G., "Observing the State of a Linear System", IEEE Trans. on Military Electronics, pp. 74-80, April 1964)) are based on a model of the system whose states are to be estimated, the use of a discrete Kalman filter for estimating the states of a continuous system implies modeling the continuous system as a discrete time-sampled system in which integrators are replaced by accumulators and continuous time-varying coefficients are replaced by discrete time-varying coefficients. In addition to propagating the state estimates, estimators may propagate the state estimate error covariance using a model that may be a different approximation to the original continuous system model. As the time interval between sampled data points is increased, the approximation error of each model increases and may result in model behavior that departs drastically from the actual system behavior.
When the effects of modeling continuous time systems by using discrete-time models are combined with the inaccuracies of quasi-linear model approximations to nonlinear systems, estimator error stability can be severely impaired. Loss of stability means severe departure between the actual state values and the estimates. Increasing the sampling rate of the model produces smaller finite difference intervals with improved performance but often at an unacceptable cost for the increased computational burden.
Optimal state estimators for linear systems form the basis for optimal control in the so-called Linear Quadratic Gaussian (LQG) control problem (Reference: Applied Optimal Control, A. E. Bryson and Y. C. Ho, John Wiley & Sons, 1975). System state variables are estimated using optimal estimation techniques, and then a quadratic objective performance criterion is applied to establish a control design strategy. Kalman filtering is commonly used as a means for estimating the state variables of a system. When the estimated state variables are combined using the control law based on the objective performance criterion (or performance index), an optimal LQG control system results.
The objective performance criterion summarizes the objectives of the system by the performance index, J, which is a mathematical expression that must be minimized in order to meet the objectives of the system. For example, the performance index can represent the final error, total time, or the energy consumed in meeting the system objective. The various possible performance indices may be used individually or in combination so that more than one objective is satisfied.
FIG. 1 is a block diagram of a basic state variable control system 10 in which the physical plant 11 that is to be controlled has a vector (multichannel) input, u(t), and a state vector, x(t), whose elements correspond to the set of state variables representing the attributes of plant 11 that are to be controlled. State vector x(t) is generally not directly accessible, but only through the measurement vector, z(t), which is available from a set of appropriate sensors in sensor unit 12. The measurement vector may have more or fewer elements than the state vector. These measurements can also be contaminated by measurement noise due to the system environment and/or sensor noise. Typically, the sensors are transducers that convert the various physical elements of the state vector representing diverse physical activity (e.g. heat, light, mass, velocity, etc.) into electrical signal representations suitable for additional signal processing. State observer 13 accepts vector z(t) as a noisy and distorted representation of the state vector x(t), from which an estimated state vector, x̂(t), is made. The estimated state vector, x̂(t), is of necessity a statistical estimate because of the stochastic nature of the random noise introduced into the measurement process and into the dynamics. The estimated state vector, x̂(t), is then used by controller (-C(t)) 14 to form a suitable input vector, u(t), to drive plant 11. Because of the stochastic nature of real control systems, the performance index must be expressed as an average value, $\bar{J}=E\{J\}$, where E{·} is the expectation operator. In order to accommodate time-varying plants, i.e. plants whose physical attributes change as a function of time, controller 14 may also be a time-varying controller based upon a time-varying performance index, J(t), and a time-varying state observer 13.
FIG. 2 is a signal flow block diagram representing a plant such as plant 11 of FIG. 1. Summing junction 111 provides at its output the vector ẋ(t), the time derivative of state vector x(t), which is the sum of three inputs to summing junction 111: the output vector F(t)x(t) of feedback gain unit 112, an input signal Γ(t)u(t) from gain unit 116, and an input noise signal G(t)w(t) from gain unit 114. The input noise, w(t), is generally assumed to be white Gaussian noise of zero mean. The state vector time derivative, ẋ(t), is applied to vector integrator 110 for producing the state vector, x(t), which is applied to vector feedback gain unit 112 and to vector gain unit 113 of measurement system 120. The output of gain unit 113, y(t), is combined with a white noise vector, v(t), by vector summing junction 115 for producing output measurement vector z(t). The system satisfies the following differential equations:

$$\dot{x}(t)=F(t)x(t)+\Gamma(t)u(t)+G(t)w(t)$$
$$y(t)=H(t)x(t) \qquad (1)$$
$$z(t)=y(t)+v(t)$$
where w(t) and v(t) are normal zero-mean vectors with covariance matrices Q(t) and R(t), respectively, i.e.

$$w(t)\sim N(0,Q(t)\delta(t))$$
$$v(t)\sim N(0,R(t)\delta(t)),$$

δ(t) is the Dirac delta function, and u(t) is a yet-to-be-defined input control vector. Also, the two noise vectors are usually assumed to be independent, i.e. $E\{w(t)v^T(t)\}=0$.
As shown in FIG. 3, the state observer 13 of FIG. 1 can be implemented as a Kalman filter that has elements identical to those identified as integrator 110, input gain unit 116, measurement gain unit 113, and state vector feedback gain unit 112, respectively corresponding to elements 130, 136, 133, and 132 in FIG. 3. The output, z(t), from sensor unit 12 of FIG. 1 is applied together with ẑ(t), the output of feedback gain unit 133, to the input of vector adder 139 for forming the difference, (z(t)-ẑ(t)), representing the error in the Kalman filter estimate of the vector z(t). The difference is applied to Kalman gain unit 137 (for applying the Kalman gain matrix to the difference), which supplies its output to one input of vector adder 138. A second input to vector adder 138 is the output of vector gain unit 132 that forms the product F(t)x̂(t), so that the vector adder output represents an estimate of the time derivative of the state vector. The Kalman filter signal flow block diagram of FIG. 3 supports the following vector differential equation:

$$\dot{\hat{x}}(t)=F(t)\hat{x}(t)+\Gamma(t)u(t)+K(t)[z(t)-H(t)\hat{x}(t)] \qquad (2)$$

and $\hat{x}(0)=\hat{x}_0$.
Equation (2) requires the solution of the following continuous Riccati equation for P(t), the state estimation error covariance matrix, in order to obtain the Kalman gain matrix, K(t), in equation (2):

$$\dot{P}(t)=F(t)P(t)+P(t)F^T(t)+G(t)Q(t)G^T(t)-K(t)R(t)K^T(t) \qquad (3)$$

where [·]^T indicates the matrix transpose operation, P(0)=P_0, and $P(t)=E\{(x(t)-\hat{x}(t))(x(t)-\hat{x}(t))^T\}$. Thus, the covariance matrix P(t) is based on the difference between the actual state represented by vector x(t) and the estimated state represented by vector x̂(t).
The Kalman gain matrix, K(t), is

$$K(t)=P(t)H^T(t)R^{-1}(t) \qquad (4)$$
Equations (2)-(4) specify the Kalman filter observer, which is the optimal linear filter producing the minimum-variance, conditional-mean estimate of the state vector of plant 11 of FIG. 1. Equations (1) describe the signal model used in constructing the filter equations (2)-(4).
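As an illustration of equations (2)-(4) (not taken from this specification, and using hypothetical matrices), the error covariance of equation (3) can be propagated numerically and the gain of equation (4) formed from it; a minimal sketch:

```python
import numpy as np

# Illustrative sketch of equations (2)-(4): propagate the error covariance
# P(t) with forward-Euler steps of the Riccati equation (3) and form the
# Kalman gain K(t) of equation (4).  All matrices below are hypothetical,
# chosen only to exercise the equations.
F = np.array([[0.0, 1.0], [-1.0, -0.5]])   # plant dynamics F(t)
G = np.eye(2)                               # process-noise gain G(t)
H = np.array([[1.0, 0.0]])                  # measurement matrix H(t)
Q = 0.1 * np.eye(2)                         # process-noise covariance Q(t)
R = np.array([[0.5]])                       # measurement-noise covariance R(t)

P = np.eye(2)                               # P(0) = P0
dt = 1e-3
for _ in range(5000):                       # integrate (3) for 5 seconds
    K = P @ H.T @ np.linalg.inv(R)          # equation (4)
    Pdot = F @ P + P @ F.T + G @ Q @ G.T - K @ R @ K.T   # equation (3)
    P = P + dt * Pdot

K = P @ H.T @ np.linalg.inv(R)              # gain at the final time
```

Because the Riccati equation does not depend on the data, this covariance/gain history can be computed off-line and stored.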
The control vector, u(t), may be considered to be an exogenous variable whose functional form can be prescribed for altering the dynamic properties of the plant. An optimal control vector obtains when u(t) is chosen to minimize a prescribed performance index, J, as previously discussed.
A quadratic performance index is commonly used that penalizes the weighted integrated quadratic error of the state vector, the quadratic value (energy) of the control vector, and the weighted quadratic error of the state vector at a designated final time, t_f, as follows:

$$J=E\left\{x^T(t_f)V_f\,x(t_f)+\int_0^{t_f}\left[x^T(t)V(t)x(t)+u^T(t)U(t)u(t)\right]dt\right\} \qquad (5)$$

where V_f, V(t), and U(t) are weighting matrices for establishing the relative importance of the three quadratic terms. Minimization of J is performed subject to the conditions implied by the plant differential equations (1), and a solution for u(t) is obtained in terms of the estimated state vector, x̂(t), as

$$u(t)=-C(t)\hat{x}(t) \qquad (6)$$
where controller gain matrix C(t) is computed using the matrix Riccati equation for control and the certainty-equivalence principle. (Reference: Applied Optimal Control, A. E. Bryson and Y. C. Ho, John Wiley & Sons, 1975)
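The gain computation for a control law of the form of equation (6) can be sketched numerically. The following minimal example (hypothetical matrices, and discrete time rather than the continuous formulation above) computes a stabilizing gain by the standard backward Riccati recursion; by the certainty-equivalence principle this computation is independent of the filter:

```python
import numpy as np

# Backward Riccati recursion for a finite-horizon discrete-time LQ problem.
# F, B are a hypothetical plant; V, U, Vf are the weighting matrices of (5).
F = np.array([[1.0, 0.1], [0.0, 1.0]])   # x_{k+1} = F x_k + B u_k
B = np.array([[0.0], [0.1]])
V = np.eye(2)                             # state weighting
U = np.array([[1.0]])                     # control weighting
Vf = np.eye(2)                            # terminal weighting

N = 200
S = Vf                                    # cost-to-go matrix at the final time
gains = []
for _ in range(N):                        # sweep backward from t_f
    C = np.linalg.inv(U + B.T @ S @ B) @ (B.T @ S @ F)   # u_k = -C x_k
    S = V + F.T @ S @ (F - B @ C)         # Riccati recursion
    gains.append(C)
C0 = gains[-1]                            # gain at the initial time
```

For a long horizon the recursion converges, so C0 approximates the steady-state LQ gain and the closed-loop matrix F - B C0 is stable.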
FIG. 4 is a block diagram of an optimal controller 15 controlling plant 11 by using measurements obtained by sensor unit 12. The optimal controller comprises a Kalman filter 13, for operating on the output of sensor unit 12 and producing an estimated state vector, and a controller gain matrix unit 14 that operates on the estimated state vector to produce the control vector for controlling the plant.
FIG. 5 is a signal flow block diagram of a discrete time-sampled model 20 for a physical plant that is to be controlled. The most obvious difference from the continuous signal model of FIG. 2 is that integrator 110 has been replaced by time delay unit 200, which introduces a vector delay of T seconds. At time t=kT, the discrete state vector, x_k, appears at the output of delay unit 200. Vector adder 201 accepts the output, F_k x_k, of matrix gain unit 203, the control vector Γ_k u_k from matrix gain unit 206, and the output, G_k w_k, of matrix gain unit 202, and presents the vector sum, x_{k+1}, corresponding to the state vector for t=(k+1)T. The measurement portion of the plant is represented by vector gain matrix unit 204, which operates on state vector x_k to produce y_k, and vector adder 205, which adds white noise vector v_k to y_k for producing output measurement vector z_k. Noise vectors w_k and v_k are assumed to be independent discrete processes.
As a result, the following set of finite difference equations, comparable to equations (1), applies to FIG. 5:

$$x_{k+1}=F_k x_k+\Gamma_k u_k+G_k w_k$$
$$y_k=H_k x_k$$
$$z_k=y_k+v_k \qquad (7)$$
$$E\{v_k v_l^T\}=R_k\delta_{kl}, \qquad E\{w_k w_l^T\}=Q_k\delta_{kl}$$
FIG. 6 is a block diagram of the discrete Kalman filter, analogous to the continuous Kalman filter of FIG. 3, that accepts the output measurement vector, z_k, of the discrete time-sampled plant model of FIG. 5 and produces a conditional estimated state vector, x̂_{k|k-1}, at the output of vector time delay unit 200. The vector x̂_{k|k-1} is fed back through vector gain unit 303 (F_k - K_k H_k^T) to vector adder 301, where it is added to the output of Kalman matrix gain unit 302 and to the input control vector Γ_k u_k from gain unit 206.
The following finite difference equations apply to the discrete Kalman filter of FIG. 6:

$$\hat{x}_{k+1|k}=(F_k-K_k H_k^T)\hat{x}_{k|k-1}+\Gamma_k u_k+K_k z_k,$$

where

$$\hat{x}_{0|-1}=E\{x_0\}=\bar{x}_0$$
$$K_k=F_k\Sigma_{k|k-1}H_k\left[H_k^T\Sigma_{k|k-1}H_k+R_k\right]^{-1}$$
$$\Sigma_{k+1|k}=F_k\left[\Sigma_{k|k-1}-\Sigma_{k|k-1}H_k\left(H_k^T\Sigma_{k|k-1}H_k+R_k\right)^{-1}H_k^T\Sigma_{k|k-1}\right]F_k^T+G_k Q_k G_k^T \qquad (8)$$

where $\Sigma_{0|-1}=P(0)=P_0$, and

$$\hat{x}_{k|k}=\hat{x}_{k|k-1}+\Sigma_{k|k-1}H_k\left(H_k^T\Sigma_{k|k-1}H_k+R_k\right)^{-1}\left(z_k-H_k^T\hat{x}_{k|k-1}\right)$$
$$\Sigma_{k|k}=\Sigma_{k|k-1}-\Sigma_{k|k-1}H_k\left(H_k^T\Sigma_{k|k-1}H_k+R_k\right)^{-1}H_k^T\Sigma_{k|k-1}$$
The above expressions involve conditional estimate vectors, x̂_{k|k-n}, and covariance matrices, Σ_{k|k-n}, for n=0 or 1, which should be interpreted as predicted values conditioned on the measured observations [z_0, z_1, ..., z_{k-n}] as follows:

$$\hat{x}_{k|k-n}=E\{x_k\mid z_0,z_1,\ldots,z_{k-n}\}$$
$$\Sigma_{k|k-n}=E\{\tilde{x}_{k|k-n}\tilde{x}_{k|k-n}^T\mid z_0,z_1,\ldots,z_{k-n}\} \qquad (9)$$

where $\tilde{x}_{k|k-n}=x_k-\hat{x}_{k|k-n}$ is the estimation error.
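The recursion of equations (8) can be sketched directly. The following minimal example (all matrices hypothetical) alternates the measurement update and the time update on data simulated from the signal model (7):

```python
import numpy as np

# Discrete Kalman filter per equations (8): measurement update followed by
# time update, with u_k = 0.  H is stored as a column so that z_k = H^T x_k + v_k,
# matching the document's convention.
rng = np.random.default_rng(0)
F = np.array([[1.0, 0.1], [0.0, 1.0]])
G = np.eye(2)
H = np.array([[1.0], [0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.25]])

x = np.array([0.0, 1.0])            # true plant state
xh = np.zeros(2)                    # x^_{0|-1}
Sig = np.eye(2)                     # Sigma_{0|-1} = P0
for _ in range(100):
    z = H.T @ x + rng.normal(0.0, 0.5, size=1)    # measurement z_k
    Om = H.T @ Sig @ H + R                         # innovation covariance
    L = Sig @ H @ np.linalg.inv(Om)
    xh = xh + L @ (z - H.T @ xh)                   # x^_{k|k}
    Sig_kk = Sig - L @ H.T @ Sig                   # Sigma_{k|k}
    xh = F @ xh                                    # x^_{k+1|k}
    Sig = F @ Sig_kk @ F.T + G @ Q @ G.T           # Sigma_{k+1|k}
    x = F @ x + G @ rng.normal(0.0, 0.1, size=2)   # simulate the plant (7)
```

The covariance iteration converges to a steady-state value independent of the data, which is why the gains can be precomputed in the linear time-invariant case.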
The Kalman filter controller shown in FIG. 4 would apply to the discrete case described above if the following changes were implemented: replace Kalman filter 13 with the discrete Kalman filter of FIG. 6; provide sensor unit 12 with analog-to-digital converter (ADC) means for periodically sampling and quantizing z(t); and provide controller 14 with a digital-to-analog converter (DAC) output with zero-order hold output means if an analog input to plant 11 is required. In this manner, the input would be held constant during the interval between successive samples of z(t). The controller 14 gain matrix would be calculated by minimizing an appropriate discrete performance index, J(k).
Because the computation of the state vector covariance, Σ, can lead to numerical instability, it is advantageous to deal with the square root, S, of the covariance, Σ=SS^T, because the product SS^T cannot lead to a matrix that fails to be nonnegative definite as a result of computational errors. Also, the numerical conditioning of S is generally much better than that of Σ, and S requires only half as many significant digits for computation. The covariance square-root matrix, S_{k|k}, can be updated using a matrix transformation as follows:

$$\begin{bmatrix}S_{k+1|k}^T\\0\end{bmatrix}=T\begin{bmatrix}S_{k|k}^T F_k^T\\ Q_k^{T/2} G_k^T\end{bmatrix} \qquad (10)$$

where S_{k+1|k}^T is an n×n upper triangular matrix and T is any orthogonal matrix that makes S_{k+1|k}^T upper triangular. Similarly, both the measurement and time updates may be obtained in combined form by the following operation:

$$T\begin{bmatrix}R_k^{T/2}&0\\S_{k|k-1}^T H_k&S_{k|k-1}^T F_k^T\\0&Q_k^{T/2} G_k^T\end{bmatrix}=\begin{bmatrix}\Omega_k^{T/2}&\bar{K}_k^T\\0&S_{k+1|k}^T\\0&0\end{bmatrix} \qquad (11)$$

where T is an orthogonal matrix that makes the transformed array upper triangular (Reference: B. D. O. Anderson and J. Moore, Optimal Filtering, Prentice Hall, 1979, pp. 148-149).
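In practice the orthogonal transformation T is supplied implicitly by a QR factorization. A minimal sketch of the time update of equation (10), with hypothetical matrices, verifying that the square-root result reproduces the conventional covariance time update:

```python
import numpy as np

# Square-root covariance time update of equation (10): propagate a square
# root S with Sigma = S S^T instead of Sigma itself.  QR factorization of
# the stacked array plays the role of the orthogonal matrix T.
F = np.array([[1.0, 0.1], [0.0, 1.0]])
G = np.eye(2)
Q = 0.04 * np.eye(2)

Sig_kk = np.array([[0.5, 0.1], [0.1, 0.3]])      # Sigma_{k|k}
S_kk = np.linalg.cholesky(Sig_kk)                # lower-triangular square root

# Stack S_{k|k}^T F^T over Q^{T/2} G^T and triangularize with QR:
A = np.vstack([S_kk.T @ F.T, np.linalg.cholesky(Q).T @ G.T])
_, Rfac = np.linalg.qr(A)                         # Rfac = S_{k+1|k}^T, upper triangular
S_next = Rfac.T

# Reference: the conventional covariance time update of equation (8)
Sig_next = F @ Sig_kk @ F.T + G @ Q @ G.T
```

Since A^T A = F Σ_{k|k} F^T + G Q G^T and the QR factor satisfies A^T A = R^T R, the product S_next S_next^T equals Sig_next without ever forming Σ explicitly.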
The discrete model described above replaces the differential equations needed to describe most practical physical systems, which operate in the continuous domain, by finite difference equations that may not adequately describe the physical system that is to be controlled. The discrete model is only an approximation to the integration process of the continuous model and may require high sampling rates (and consequently more processing) in order to provide an acceptable approximation to the continuous model. This problem becomes more acute when these techniques are applied to the nonlinear time-varying system models that more accurately describe practical processing systems. The prior art has attempted to extend the discrete Kalman filter control technique from linear time-varying systems to nonlinear time-varying systems by a method known as extended Kalman filtering (EKF).
A rather general model for a nonlinear plant is given by the following nonlinear differential equations:

$$\dot{x}=f(x,u)+w$$
$$z=g(x)+v \qquad (12)$$
where u, v, w, x, and z represent the same variables as previously defined but with the time dependence suppressed, f(·) is a nonlinear function of both the state vector, x, and the input vector, u, and g(·) is a nonlinear function of the state vector, x. The corresponding nonlinear plant block diagram model, 40, is shown in FIG. 7, where input unit 401 accepts input vector u and state variable feedback x and produces the nonlinear output vector f(x,u), to which vector summer 402 adds noise vector w to form the state vector time derivative, ẋ. Vector integrator 403 operates on ẋ to produce state vector x at the output. State vector x is fed back to input unit 401 and to measurement nonlinear transformation unit 404, which transforms vector x by forming the output vector g(x), to which noise vector v is added for producing the measurement observation vector z.
Because optimal nonlinear state vector estimation is generally computationally intractable, common practical approaches approximate equations (12) by using finite difference equations and by adapting the linear methods discussed above after linearizing the nonlinear plant differential equations (12) along a nominal trajectory, so that the equations take on the following form:

$$x_{k+1}=A_k x_k+B_k u_k+w_k$$
$$z_k=C_k x_k+v_k, \qquad k=1,2,\ldots \qquad (13)$$
These equations represent a time-varying linear model with index k, and discrete time state vectors. The nominal trajectory for the non-linear equations (12) must be available for computation of the linearized model.
Anderson et al. (op. cit., p. 195) apply Kalman filter methods to obtain a suboptimal estimator for the discrete model given by the following equations:

$$x_{k+1}=f_k(x_k)+g_k(x_k)w_k$$
$$z_k=h_k(x_k)+v_k \qquad (14)$$
where the trajectory index and the time sampling index are both k, f_k(·) and h_k(·) are nonlinear time-dependent operators, g_k is a time-dependent gain, v_k and w_k are mutually independent zero-mean white Gaussian processes with covariances R_k and Q_k, respectively, and $x_0\sim N(\bar{x}_0,P_0)$.
The nonlinear model is linearized by Taylor series expansion about the most recent state estimates, so that

$$f_k(x_k)\cong f_k(\hat{x}_{k|k})+F_k(x_k-\hat{x}_{k|k})$$
$$g_k(x_k)\cong g_k(\hat{x}_{k|k})\equiv G_k \qquad (15)$$
$$h_k(x_k)\cong h_k(\hat{x}_{k|k-1})+H_k^T(x_k-\hat{x}_{k|k-1})$$
where

$$F_k=\left.\frac{\partial f_k(x)}{\partial x}\right|_{x=\hat{x}_{k|k}} \qquad\text{and}\qquad H_k^T=\left.\frac{\partial h_k(x)}{\partial x}\right|_{x=\hat{x}_{k|k-1}}.$$
Assuming that x̂_{k|k} and x̂_{k|k-1} are known, the resulting quasi-linear signal model equations may be written so that they are similar in form to the linear equations used previously, as follows:

$$x_{k+1}=F_k x_k+G_k w_k+\chi_k$$
$$z_k=H_k^T x_k+v_k+\eta_k \qquad (16)$$
where χ_k and η_k are available from the estimates as

$$\chi_k=f_k(\hat{x}_{k|k})-F_k\hat{x}_{k|k}$$
$$\eta_k=h_k(\hat{x}_{k|k-1})-H_k^T\hat{x}_{k|k-1} \qquad (17)$$
The corresponding EKF equations are as follows:

$$\hat{x}_{k|k}=\hat{x}_{k|k-1}+L_k\left[z_k-h_k(\hat{x}_{k|k-1})\right]$$
$$\hat{x}_{k+1|k}=f_k(\hat{x}_{k|k})$$
$$L_k=\Sigma_{k|k-1}H_k\Omega_k^{-1}$$
$$\Omega_k=H_k^T\Sigma_{k|k-1}H_k+R_k \qquad (18)$$
$$\Sigma_{k|k}=\Sigma_{k|k-1}-\Sigma_{k|k-1}H_k\Omega_k^{-1}H_k^T\Sigma_{k|k-1}$$
$$\Sigma_{k+1|k}=F_k\Sigma_{k|k}F_k^T+G_k Q_k G_k^T$$

where $\Sigma_{0|-1}=P_0=E\{(x_0-\bar{x}_0)(x_0-\bar{x}_0)^T\}$ and $\hat{x}_{0|-1}=\bar{x}_0$.
FIG. 8 is a block diagram that shows the EKF equations (18). FIG. 8 is labeled so as to correspond to the variables and matrices of equations (18).
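Equations (18) can be sketched as follows; the scalar system, its nonlinearities, and all noise levels below are hypothetical, chosen only to exercise the recursion:

```python
import numpy as np

# EKF recursion of equations (18) for a hypothetical scalar system
# x_{k+1} = f(x_k) + w_k, z_k = h(x_k) + v_k, with the Jacobians F_k and
# H_k evaluated at the latest estimates as in equations (15).
rng = np.random.default_rng(1)
f  = lambda x: 0.9 * x + 0.2 * np.sin(x)   # nonlinear dynamics
fp = lambda x: 0.9 + 0.2 * np.cos(x)       # F_k = df/dx at x^_{k|k}
h  = lambda x: x ** 2                       # nonlinear measurement
hp = lambda x: 2.0 * x                      # H_k = dh/dx at x^_{k|k-1}
Qk, Rk = 0.01, 0.04

x, xh, Sig = 2.0, 1.5, 1.0                  # truth, x^_{0|-1}, Sigma_{0|-1}
for _ in range(50):
    z = h(x) + rng.normal(0.0, Rk ** 0.5)   # measurement z_k
    Hk = hp(xh)
    Om = Hk * Sig * Hk + Rk                 # innovation covariance Omega_k
    L = Sig * Hk / Om                       # gain L_k
    xh = xh + L * (z - h(xh))               # measurement update x^_{k|k}
    Sig = Sig - L * Hk * Sig                # Sigma_{k|k}
    Fk = fp(xh)
    xh = f(xh)                              # time update x^_{k+1|k} = f_k(x^_{k|k})
    Sig = Fk * Sig * Fk + Qk                # Sigma_{k+1|k}
    x = f(x) + rng.normal(0.0, Qk ** 0.5)   # simulate the true plant
```

Note that the covariance recursion depends on the trajectory through F_k and H_k, so, unlike the linear case, the gains cannot be precomputed off-line.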
The prior-art EKF technology must be selectively and judiciously applied; otherwise, poor performance can result from instability of the numerical integration, instability of the filtering equations, and numerical error build-up. The finite time increments used in the difference equation approximations may need to be so fine that they approach infinitesimal increments. Consequently, the additional computation resulting from these fine-increment difference equations can impose an unacceptable computational burden on the control system processor.
Jazwinski (A. H. Jazwinski, Stochastic Processes and Filtering Theory, Academic Press, 1970) describes an extended Kalman filter for continuous-time systems without exogenous inputs u(t). Equations (14) above are replaced by

$$\dot{x}=f(x,t)+G(t)w$$
$$z=h(x,t)+v \qquad (14a)$$
The filtering equations are identical to (18) except that the state estimate time propagation is performed by direct numerical integration:

$$\hat{x}_{k+1|k}=\hat{x}_{k|k}+\int_{t_k}^{t_{k+1}}f(\hat{x}(t),t)\,dt$$
The use of direct numerical integration of the continuous-time nonlinear system presents several problems, principally stability of the numerical integration, stability of the filtering equations, and the computational load for real-time use.
Instability of the numerical integration arises from the step sizes that must be selected to achieve feasible computational loads. Instability of the filter equations arises from mismatch between the linearized signal model used in the filter update and the implicit signal model in the numerical integration. Typical physical models have widely different time scales, leading to stiff differential equations (Reference: G. Golub and J. Ortega, Scientific Computing and Differential Equations, Academic Press, 1992). Prior methods apply variable-step solvers, such as Gear's stiff method, to such systems, but variable-step solvers impose a highly variable, data-dependent computational load, which makes them inappropriate for real-time use. Fixed-step methods such as Runge-Kutta or Euler require a very small step size to ensure stability of the numerical integration itself, and lack of numerical stability of the integration leads to large errors. Moreover, numerical stability of the integration is not sufficient to provide stability of the real-time filter.
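The step-size limitation can be seen on the classical stiff test equation ẋ = -λx (a standard textbook illustration, not an example from the cited references): explicit Euler is numerically stable only when |1 - λ·dt| < 1, regardless of how smooth the exact solution is.

```python
# Explicit Euler on the stiff test system x' = -lam * x.  The exact
# solution decays, but Euler diverges once the step dt violates the
# stability bound |1 - lam*dt| < 1.  Values are illustrative.
lam = 100.0

def euler(dt, steps, x0=1.0):
    x = x0
    for _ in range(steps):
        x = x + dt * (-lam * x)       # one explicit-Euler step
    return x

x_fine = euler(0.001, 1000)           # lam*dt = 0.1, inside the bound: decays
x_coarse = euler(0.03, 1000)          # lam*dt = 3, outside the bound: blows up
```

Each step multiplies the state by (1 - λ·dt), so the coarse run grows as 2^k even though the true solution is essentially zero; this is the mechanism behind the "very small step size" requirement described above.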
The present invention, by contrast, imposes a fixed computation burden and permits time propagation in a single step that ensures numerical stability of the integration as well as of the filter. Stability of the filter in the present invention arises from ensuring that
1) the time update for the state estimates and the covariance updates for the filtering equations use consistent time-varying linear system signal models; and

2) the time-varying linear signal model satisfies conditions for observability, controllability through disturbances, and positivity of the initial state error covariance matrix P_0.

In the present invention, these conditions are easily satisfied regardless of the accuracy of the nonlinear system equations (12), (19) with respect to the real system that generates the signals used by the filter, so that stability is preserved under large model errors. This property is useful in applications of CTEKF to adaptive processing, real-time system identification, hypothesis testing, and fault detection and diagnosis, where large model errors must be tolerated in real time. Consequently, the resulting state estimates are stable and avoid the unstable consequences of prior-art EKF state estimation methods.

The prior-art formulations (14) and (14a) do not explicitly account for exogenous inputs u(t). In digitally controlled systems, exogenous inputs are typically the outputs of zero-order-hold circuits. Exogenous inputs play a critical role in the behavior of controlled systems and must be accurately represented in the model used in the filtering equations. The present invention addresses exogenous inputs explicitly and handles digitally controlled exogenous inputs effectively.

A sensor processing method for the implementation of extended Kalman filter techniques for continuous-time nonlinear systems should provide the following features:

1) the use of numerical integration in the solution of the continuous-time nonlinear differential system equations in the state estimate time update that imposes a low, fixed, deterministic computation load;

2) covariance update and gain calculations that ensure stability of the combined state estimate update and the filter update in spite of mismatch between the system generating the signals and its assumed signal model in the filter;

3) a numerically stable square-root method for updating the state vector covariance matrix; and

4) a reduced probability that minimization of the estimation error metric results in a local minimum rather than the desired global minimum.

The method of the present invention comprises the following steps:

(a) constructing a set of state equations representing the nonlinear system;

(b) establishing a set of state variable values for a current time state vector, and a set of matrix element values for each of the covariance matrices for the current time state vector, the current time state noise vector, and the current time output measurement noise vector;

(c) acquiring a current time measurement vector;

(d) updating the state covariance matrices using the current time measurement covariance;

(e) estimating an updated current time state vector from the current time measurement vector by use of a state vector estimating filter that operates on the current time state vector using the current time measurement vector and the covariance matrices for the current time state vector, the current time state noise vector, and the current time measurement noise vector, the estimated updated state vector representing the state vector at the current measurement time;

(f) projecting the estimated updated state vector forward in time by integrating the system state equations over the prescribed time interval between the current prescribed measurement time and the next prescribed measurement time for obtaining a current state vector;

(g) projecting the updated state vector covariance matrix forward in time by using the results of the system equation integration of step (f); and

(h) at the next measurement time, iterating steps (c)-(g).
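The filtering iteration of steps (c)-(h) above can be sketched for a hypothetical scalar linear plant; this is a sketch of the iteration only, not the patent's actual implementation, with a zero-order-hold input and a single-step time propagation:

```python
import numpy as np

# Sketch of steps (c)-(h) for the hypothetical continuous-time system
# x' = -x + u + w, z = x + v, sampled every T seconds.  The time update
# propagates the estimate over the full interval in a single step (step (f))
# and propagates the covariance with the matching discretization (step (g)),
# so both updates use a consistent signal model.
rng = np.random.default_rng(2)
Qc, Rk, T = 0.01, 0.04, 0.1             # noise intensities and sample interval

x, xh, Sig = 1.0, 0.0, 1.0              # truth, estimate, covariance (step (b))
u = 0.5                                  # input held constant between samples (ZOH)
for _ in range(100):
    z = x + rng.normal(0.0, Rk ** 0.5)  # step (c): acquire measurement
    Om = Sig + Rk                        # steps (d)-(e): measurement update
    L = Sig / Om
    xh = xh + L * (z - xh)
    Sig = Sig - L * Sig
    # step (f): propagate the estimate one full interval in a single step;
    # for this linear model the exact transition is Phi = e^{-T}
    Phi = np.exp(-T)
    xh = Phi * xh + (1.0 - Phi) * u
    # step (g): propagate the covariance with the same discretization
    # (Qc*T is a first-order approximation of the integrated process noise)
    Sig = Phi * Sig * Phi + Qc * T
    # step (h): iterate at the next measurement time; simulate the plant
    x = Phi * x + (1.0 - Phi) * u + rng.normal(0.0, (Qc * T) ** 0.5)
```

Because the transition over each interval is computed in one step from the same linearized model used for the covariance, the fixed per-sample cost and the consistency conditions 1) and 2) above are both respected in this toy setting.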