A. Field of the Invention
This invention relates to missile guidance, and more particularly to real-time computation of missile guidance commands for optimal guidance of the missile.
B. Description of the Related Art
1. General Theoretical Art
a. Calculus of Variations
The basic objective of the calculus of variations is to find the unknown function that locates an extremum of a definite integral. Ordinary calculus, by contrast, locates the extremum of a function of one or more variables. For example:

I = V[thr, z, x, y, u] (Equation 1)
is a velocity function. This function varies with the positions x, y, z, the thrust (thr), and the control variable u. To find the extremum of the velocity, the first variation of the velocity equation is used: ##EQU1## It has been shown that this variational relation simplifies to the following set of equations: ##EQU2## The variables that solve these equations define an extremum for the velocity.
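The ordinary-calculus case above can be illustrated with a short numerical sketch. The quadratic function, its two arguments, and its minimum below are hypothetical stand-ins for the velocity function V, whose actual dependence on x, y, z, thr, and u is not specified here:

```python
# Illustrative stand-in for a velocity-like function of two variables.
def V(x, y):
    return (x - 2.0) ** 2 + (y + 1.0) ** 2 + 3.0

def grad_V(x, y):
    # Partial derivatives; at an extremum both must vanish.
    return (2.0 * (x - 2.0), 2.0 * (y + 1.0))

# Solving dV/dx = 0 and dV/dy = 0 gives the extremum in closed form.
x_star, y_star = 2.0, -1.0
gx, gy = grad_V(x_star, y_star)
assert gx == 0.0 and gy == 0.0
print(V(x_star, y_star))  # extremal (minimum) value: 3.0
```

The set of equations obtained by setting each partial derivative to zero plays the same role for a function of several variables that the Euler-Lagrange equations play for a definite integral.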
As noted earlier, it is the extremum of a definite integral which is formally in the domain of the calculus of variations. Using another function for velocity provides the example: ##EQU3##
In lieu of an extremum for a function, an extremum for a definite integral is found. And in lieu of finding the appropriate variables, the appropriate functions (h) are found. It has been shown that a necessary condition for a solution to this type of extremum problem is that it satisfy the Euler-Lagrange equations: ##EQU4##
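The Euler-Lagrange condition d/dx(∂F/∂h') − ∂F/∂h = 0 can be checked numerically on a standard textbook functional. The arclength integrand F = sqrt(1 + h'²) and the straight-line candidate below are illustrative, not the velocity functional of the text; a straight line should drive the residual to zero:

```python
import math

# Candidate extremal: a straight line (the known minimizer of arclength).
def h(x):
    return 0.5 * x + 1.0

def F(hv, hp):                       # integrand F(h, h')
    return math.sqrt(1.0 + hp * hp)

def dF_dh(hv, hp, eps=1e-6):         # partial wrt h (zero for arclength)
    return (F(hv + eps, hp) - F(hv - eps, hp)) / (2 * eps)

def dF_dhp(hv, hp, eps=1e-6):        # partial wrt h'
    return (F(hv, hp + eps) - F(hv, hp - eps)) / (2 * eps)

def euler_lagrange_residual(x, eps=1e-4):
    hp = (h(x + eps) - h(x - eps)) / (2 * eps)        # h'(x)
    def p(xx):                                        # dF/dh' along the curve
        hpx = (h(xx + eps) - h(xx - eps)) / (2 * eps)
        return dF_dhp(h(xx), hpx)
    # d/dx(dF/dh') - dF/dh, by central differences
    return (p(x + eps) - p(x - eps)) / (2 * eps) - dF_dh(h(x), hp)

assert abs(euler_lagrange_residual(0.7)) < 1e-4
```

The tolerance is loose only because of finite-difference rounding; analytically the residual is identically zero for any straight line.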
The Euler-Lagrange equations are obtained by setting the "first variation" of the integral in Equation 4 to zero. The first variation is found by varying each of the functions h_i(x) by an amount ε_i u_i(x), where ε_i is an arbitrary constant independent of x and h_i(x), and u_i(x) denotes any arbitrary function of x that is independent of ε_i. Then the integral I + δI is defined by: ##EQU5##
Upon solving for δI to obtain a power series expansion of δI in terms of powers of ε_i, the "first variation" is the coefficient of the term in ε_i to the first power.
For problems of the type encountered in optimal control, it is still necessary to solve a variational problem with auxiliary conditions. Now consider that the functions h_i are not independent but are restricted by some auxiliary conditions. Initially, velocity was considered as a function of the several variables x, y, z, thr, and u. In solving the definite integral, the integrand of velocity was allowed to comprise several functions of one variable t. Now consider a system composed of several functions of several variables. The auxiliary system functions are portrayed as f_i, with the variables x_j and t, creating the set of equations: ##EQU6## with V[h_i(x_j,t)] as before. The auxiliary condition on the integrand creates a problem when the system functions are not known to be independent. For example, if the system functions were independent, the independent equations could be expressed under separate integrals as follows: ##EQU7## From this set of equations would follow m+1 independent Euler-Lagrange equations. However, since it is not known that the functions are independent, knowledge of the other relationships is required. Using a Lagrange multiplier, the previously assumed independent equations may be expressed as dependent equations under the same integral as: ##EQU8## A solution set utilizing the Euler-Lagrange equations can be shown to be: ##EQU9##
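The Lagrange-multiplier device for adjoining an auxiliary condition can be sketched on a minimal example. The quadratic objective and linear constraint below are chosen purely for illustration; they are not the system functions of the text:

```python
# Adjoin the constraint g(x, y) = x + y - 1 = 0 to the objective
# f(x, y) = x^2 + y^2 as f + lam*g, then require every partial
# derivative (including the one with respect to lam) to vanish:
#   2x + lam = 0,   2y + lam = 0,   x + y - 1 = 0.
lam = -1.0           # solving the three linear equations by hand
x = -lam / 2.0       # = 0.5
y = -lam / 2.0       # = 0.5

assert abs(x + y - 1.0) < 1e-12   # the auxiliary condition holds
assert abs(2 * x + lam) < 1e-12   # d(f + lam*g)/dx = 0
assert abs(2 * y + lam) < 1e-12   # d(f + lam*g)/dy = 0
```

The same adjoining step, applied under the integral sign, is what produces the dependent Euler-Lagrange system above.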
An alternative methodology to the Euler-Lagrange technique for solving the variational problem with auxiliary conditions is the Finite Element Method. This alternative methodology is a key element in making optimal missile guidance viable for real-time processing.
b. Finite Element Method (FEM)
Given the integral: ##EQU10## the time interval can be broken up into N elements with a normalized elemental time defined by: ##EQU11## Using a trapezoidal integration scheme, for example, the integral becomes: ##EQU12## These equations are of a similar nature to the forthcoming equations used in the weak Hamiltonian method of optimal missile guidance. The finite discretization appears similar to trapezoidal integration because of the selection of the shape functions.
2. Application Specific Art
a. Missile Guidance
Missiles are rocket-motor-boosted vehicles that are designed to fly pilotless from a launch position to a desired target position. Various techniques have been used to guide a missile depending upon the launch and target positions and the operational characteristics of the missile.
A first class of techniques uses an external processor for performing computations to derive guidance commands for a missile. The external processor is located on the ground or in an airborne launch vehicle. The guidance commands are transmitted from the external processor to the missile via a radio frequency or laser uplink communication path. In response to the transmitted guidance commands, onboard instruments guide the missile.
A second class of techniques uses computations that are performed onboard the missile. Inputs for such onboard computations may come from instruments carried onboard the missile itself or from signals transmitted to the missile from external sources.
Generally, instruments carried in a missile for performing onboard guidance include inertial devices that sense body accelerations and rotational rates with respect to three orthogonal axes. These inertial measurements provide position, velocity, and attitude information to a programmed digital processor unit that computes the guidance commands for the missile.
Optimal guidance of a missile involves guiding the missile to optimize a preselected performance function (J) subject to certain constraints.
The real-time control of the missile involves the additional problem of determining control values as a function of time in order to guide the missile along a trajectory that will optimize the performance function J subject to certain inequality constraints on the control variables. An inequality constraint is a prescribed condition that one quantity is less than, less than or equal to, greater than, or greater than or equal to another quantity. An inequality constraint function is a function that defines an inequality constraint. For a missile, the control variables include the angular acceleration of the missile, up to certain maximum values. For a solid-fueled rocket, the thrust generally is not controlled; for a liquid-fueled missile, the thrust may be controlled up to a certain maximum value.
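Such inequality constraints can be converted to equality constraints with slack variables, in the form G + K² = 0 used later in the weak Hamiltonian formulation. A minimal scalar sketch, with an illustrative bound u ≤ u_max:

```python
# Illustrative scalar control bound: u <= u_max, i.e. G(u) = u - u_max <= 0.
u_max = 5.0

def G(u):
    return u - u_max

def slack_for(u):
    # A real slack variable k with G + k^2 = 0 exists only where the
    # inequality holds; k = 0 exactly when the constraint is active.
    g = G(u)
    if g > 0.0:
        raise ValueError("constraint violated; no real slack variable exists")
    return (-g) ** 0.5

u = 3.0
k = slack_for(u)
assert abs(G(u) + k * k) < 1e-12     # G + k^2 = 0
assert slack_for(u_max) == 0.0       # active constraint -> k = 0
```

The slack variable thus encodes, in a single equality, both the interior of the feasible region and its boundary.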
b. Weak Hamiltonian Optimization Method (WHOA)
For solving the missile control problem, the dynamic state of the missile is represented by n state variables x_i(t) satisfying n "state equations," which are the first-order differential equations: ##EQU13## The n state equations are specific formulations of the auxiliary conditions in equation (3) above, in which the time t has been selected as the independent variable. The state variables x_i(t) typically include the missile's coordinates (x, y, z), angular orientation, linear velocity, angular velocity, and mass. The functions u_1(t), u_2(t), . . . , u_r(t) are the control variables. The control problem can be expressed as the problem of finding the control variables that maximize or minimize the definite integral ##EQU14## In general, the final time t_f will be an unknown.
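The state equations can be propagated numerically once the controls are chosen. The two-state point-mass model and forward-Euler integrator below are an illustrative sketch, not the missile's actual state model:

```python
# Minimal state equations x' = f(x, u): one-dimensional position and
# velocity states, with a commanded acceleration u as the single control.
def f(state, u):
    pos, vel = state
    return (vel, u)               # pos' = vel, vel' = u

def propagate(state, u, dt, steps):
    # Forward-Euler propagation of the state equations.
    for _ in range(steps):
        d = f(state, u)
        state = (state[0] + dt * d[0], state[1] + dt * d[1])
    return state

# One second of constant acceleration u = 2 from rest.
pos, vel = propagate((0.0, 0.0), u=2.0, dt=0.001, steps=1000)
assert abs(vel - 2.0) < 1e-9      # exact under Euler for constant u
assert abs(pos - 1.0) < 5e-3      # approaches u*t^2/2 = 1 as dt -> 0
```

The optimal control problem asks which time history u(t), fed into such a propagation, extremizes the performance integral.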
Real-time control of the missile requires a numerical method of solving the equations subject to suitable boundary conditions. Due to the limited data processing capability available when a numerical solution is computed in the missile, a finite-element procedure is used in which the control variables are iteratively computed for a time t + Δt based on conditions existing at a time t. The computation should give a finite approximation of an optimal solution. Additionally, the computation should be "unconditionally stable."
A computational technique having these desirable properties is proposed in Hodges et al., "Weak Hamiltonian Finite Element Method for Optimal Control Problems," Journal of Guidance, Control, and Dynamics, Vol. 14, No. 1, January-February 1991, American Institute of Aeronautics and Astronautics, pp. 148-156, incorporated herein by reference; and Bless et al., "Finite Element Solution of Optimal Control Problems with Inequality Constraints," Proceedings of the 1990 American Control Conference, San Diego, Calif., May 23-25, 1990, Volume 1, American Automatic Control Conference, pp. 242-247, incorporated herein by reference.
The weak Hamiltonian method is a combination of optimal control theory and finite element discretization with weakly coupled boundary conditions. Permitting a finite element solution to the first variation, the performance function (J) is formulated as: ##EQU15## where:

L is an integrand component of the performance function;

λ is a matrix of unknown Lagrangian multiplier functions for adjoining the system state equations;

T denotes the matrix transpose operation;

f is a matrix of system state equations;

ẋ is a matrix of the state rates with respect to time;

μ is a matrix of unknown Lagrangian multiplier functions for adjoining the control constraints;

G is a matrix of control inequality constraint functions G(x,u,t) for control functions u(t) such that G ≤ 0;

K is a matrix of slack variables such that G + K² = 0;

Φ ≡ φ + υ^T Ψ, where:

φ is a performance function of the states and time, defined only at the initial and final times t_0 and t_f;

υ is a matrix of unknown discrete Lagrangian multipliers defined only at the initial and final times t_0 and t_f for adjoining boundary condition constraints; and

Ψ is a matrix of boundary condition constraints imposed at the initial and final times t_0 and t_f;

α is a matrix of unknown discrete Lagrangian boundary condition multipliers for natural coupling of finite element values to finite element nodes of the states at the initial and final times t_0 and t_f; and

x̂ is a matrix of finite element nodal values of the states at the initial and final times t_0 and t_f, such that:

x|t_0 is the limit of x(t) as t → t_0;

x|t_f is the limit of x(t) as t → t_f;

x̂|t_0 is x̂(t_0); and

x̂|t_f is x̂(t_f).
As described in Hodges et al. and Bless et al. cited above, the first variation is taken of the performance function (J) of Equation 16. The first variation has an ẋ term, which is then integrated by parts. The boundary conditions are placed in weak form, and the first variation is set to zero, giving the following solution for a minimum or maximum of the performance index J of Equation 16: ##EQU16##
Equation 17 is the governing equation for the weak Hamiltonian method with control constraints. An important advantage of the weak Hamiltonian method is that time derivatives of the states and co-states do not appear in Equation 17. This permits a simple choice of shape functions, which leads to a simplified integration resembling a trapezoidal integration scheme.
Equation 17 is solved by performing finite element discretization. The integral in Equation 17 is broken into N integrals, each being an integral over one of the N finite elements, spanning t = t_0 to t = t_f. Constant shape functions are used for u, δu, x, λ, μ, and K, and linear shape functions are used for δx, δλ, δμ, and δK. The integral over each element can therefore be evaluated by inspection, and the integral in Equation 17 becomes a summation over N elements. Solutions are found only for discrete mid-point values of the controls u(t). The boundaries between the finite elements are referred to as nodes, which occur at t = t_i. When there are no state discontinuities, the nodal values of the shape functions in the neighboring elements cancel each other in the summation. When there are state discontinuities, the nodal values of the shape functions in the neighboring elements do not necessarily cancel. This problem is conveniently solved by treating the state discontinuity as a pair of boundary conditions at the node: one boundary condition for the integral or summation at times less than but approaching t_i in the limit, and the other for times greater than but approaching t_i in the limit. In other words, the time line and the performance integral in Equation 9 are broken into two different "phases" by the state discontinuity, and a respective discrete Lagrangian multiplier is introduced for adjoining each of the two boundary conditions into the performance function (J).
The summation over N elements becomes a system of simultaneous algebraic equations. In general, 2n of the 4n endpoint values for the states and corresponding Lagrangian multipliers (x_0, x_f, λ_0, λ_f) must be specified. The initial conditions (x_0) are known in accordance with physical constraints. Also, λ_f can be specified in terms of other unknowns, because the admissible variations of the states must be continuous at the initial and final times: ##EQU17## Therefore, the system of algebraic equations includes as many equations as unknowns, and it can be solved for the control variables.
Normally, the system of equations can be solved by expressing the Jacobian explicitly and using a Newton-Raphson solution procedure. The Newton-Raphson solution procedure is described, for example, on pages 43-44 of the CRC Handbook of Mathematical Tables, 1980, Chemical Rubber Company Press Inc., Boca Raton, Fla. Let the system of algebraic equations be designated as E_i(X) = 0, where X is a vector of unknowns. The Newton-Raphson solution procedure is an iterative method that starts with a trial solution X^[0] and obtains successive approximations X^[j+1] by solving the simultaneous linear equations ##EQU18##
Equation 21 can be written as the matrix equation

E(X^[j]) + [J](X^[j+1] - X^[j]) = 0 (Equation 22)
where [J] is the so-called Jacobian matrix, having elements that are the partial derivatives ##EQU19## evaluated at X_k = X_k^[j]. Equation 22 can be solved by inverting the Jacobian matrix, to obtain the iterative equation:

X^[j+1] = X^[j] - [J]^-1 E(X^[j]) (Equation 23)
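The iteration of Equation 23 can be sketched on a small example system. The two equations, the trial solution, and the explicit 2×2 Jacobian inverse below are illustrative; they are not the guidance equations themselves:

```python
import math

# Example system E(X) = 0:
#   E1 = x^2 + y^2 - 4 = 0,  E2 = x - y = 0   (solution x = y = sqrt(2)).
def E(x, y):
    return (x * x + y * y - 4.0, x - y)

def jacobian(x, y):
    # Partial derivatives dE_i/dX_k, expressed explicitly.
    return ((2.0 * x, 2.0 * y),
            (1.0, -1.0))

def newton_step(x, y):
    e1, e2 = E(x, y)
    (a, b), (c, d) = jacobian(x, y)
    det = a * d - b * c
    # Apply the inverse Jacobian to E, as in Equation 23.
    dx = (d * e1 - b * e2) / det
    dy = (-c * e1 + a * e2) / det
    return x - dx, y - dy

x, y = 1.0, 2.0               # trial solution X^[0]
for _ in range(20):
    x, y = newton_step(x, y)  # X^[j+1] = X^[j] - [J]^-1 E(X^[j])

assert abs(x - math.sqrt(2.0)) < 1e-10
assert abs(y - math.sqrt(2.0)) < 1e-10
```

The quadratic convergence visible here is what makes the procedure attractive for per-element solution in real time.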
A sparse matrix solver can be used as coded in Duff, I.S., Harwell Subroutine Library, Computer Sciences and Systems Division, Harwell Laboratory, Oxfordshire, England, February 1988, Chapter M. To solve for the solution at the first element, an initial guess can be made based on the initial conditions. An initial guess for successive elements can be based upon the solutions for a number of previous elements. In general, the iterations of the Newton-Raphson procedure converge rapidly, so that a large number of elements can be solved with a very efficient run-time on a computer.
Successful flight of a missile to its target, and the missile's arrival at the target with a desired attitude and velocity, depend upon the nature of the guidance procedure and its mechanization. Current guidance procedures and their mechanizations have failed to achieve the potential success suggested by optimal guidance and control theory. This results primarily from approximations and mathematical truncations imposed in order to complete the guidance computational task in an acceptable time frame.
The weak Hamiltonian finite element method used as the basis of this invention failed to converge robustly upon an optimal solution when put into practice, as have all of the published schemes for real-time optimal guidance. In particular, the method as published is subject to convergence problems when the control variables enter certain ranges: when the control values approach zero or come close to a control boundary, the published method fails to converge upon a solution.