Computer-implemented control laws are widely employed in physical systems, such as aircraft engines, turbines, chemical plants, and oil refineries, for controlling and predicting the behavior of those systems, often in real time. Three types of control laws are frequently employed, namely, model inverting control laws, model predictive control laws, and linear quadratic Gaussian control laws. All of the aforementioned control laws are multivariable control laws capable of decoupling the responses of naturally cross-coupled systems, in which effector changes (e.g., changes in the means for adjusting or manipulating control variables, such as thrust) simultaneously affect multiple goals (or outputs). Nevertheless, several key differences between the three control laws exist. To facilitate a proper understanding of this disclosure, each of the three control laws is briefly discussed below.
Model inverting control (MIC) laws may include constrained dynamic inversion (CDI), dynamic inversion (DI), backstepping, and feedback linearization control laws, which express the desired controlled-system response to commands as an analytical model and invert that model to design the control. CDI determines the current effector requests that make the modeled output match the desired output at the next time step. CDI presumes that both the dynamics of the physical system to be controlled (e.g., the engine) and the desired response dynamics (e.g., outputs) are known in the form of analytical models. CDI also presumes that limits (e.g., physical limits of the physical system that must be observed) are given, and that the desired feedback properties are specified. CDI's objective is to make goal variables track the desired response while holding all limits. This type of control law has several advantages. For example, CDI can handle actuator limits and general output limits, such as a maximum safe internal temperature. The key disadvantage of model inverting control laws is that they are unstable if they are working with active goals and limits for which the plant dynamics are either unstable or non-minimum phase (NMP). NMP is a type of dynamic response, characterized in the frequency domain, in which multivariable system dynamics can be represented by matrices of transfer functions that may be analyzed in terms of their poles and transmission zeros. Poles represent the modes of the dynamic response and may be stable or unstable. NMP systems have transmission zeros that would be unstable if they were poles. CDI is also sensitive to model errors. Furthermore, CDI predicts signals at only one future controller update (e.g., one time step ahead) and thus requires that the system be controllable in one step.
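The one-step inversion described above can be illustrated with a minimal sketch. This is not any particular engine controller; it assumes a generic linear plant model x[k+1] = A x[k] + B u[k], y[k] = C x[k], with illustrative matrices and actuator limits chosen only for demonstration:

```python
import numpy as np

# Illustrative plant model (not from any actual engine).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[1.0],
              [0.5]])
C = np.array([[1.0, 0.0]])
U_MIN, U_MAX = -2.0, 2.0  # assumed actuator limits

def cdi_step(x, y_des):
    """One-step inversion: solve C (A x + B u) = y_des for u, then clip to limits."""
    CB = C @ B  # must be invertible -- this is the one-step controllability requirement
    u = np.linalg.solve(CB, y_des - C @ A @ x)
    return np.clip(u, U_MIN, U_MAX)
```

Note that the sketch makes the one-step controllability requirement concrete: if C @ B were singular, no effector request could place the output on the desired value in a single step.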
Model predictive control (MPC), on the other hand, is similar to CDI but with some key differences. For example, whereas CDI determines effector settings based on a prediction optimization spanning only one future controller update (e.g., one time step ahead), MPC bases effector settings on an optimization spanning N prediction steps ahead, where N is greater than one. This affects both the function and the implementation of the control law. Functionally, MPC is more robust than CDI, is more tolerant of model errors, and requires N-step controllability rather than one-step controllability. Furthermore, if N is large enough, then MPC may stabilize an unstable system and may be robust to NMP dynamics as well. This robustness can be exploited for a number of benefits. First, it allows the model to be adaptive. An adaptive model can more accurately predict the behavior of the physical system, accommodating manufacturing variations, wear and aging, and even damage. Also, the risk of adding, removing, or modifying goals and limits is reduced with MPC; in CDI, any of these changes may change the system from minimum phase (MP) to NMP. Thus, MPC is attractive in that it can better tolerate non-minimum phase dynamics and unstable modes, does not require one-step controllability, and can hold all limits. As mentioned above, MPC control laws are also less sensitive to model errors and variations than are CDI control laws.
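The N-step optimization underlying MPC can be sketched in its simplest unconstrained, linear, least-squares form. The plant matrices, horizon, and input penalty below are illustrative assumptions, and a practical MPC law would additionally impose the goal and limit constraints discussed above:

```python
import numpy as np

# Illustrative plant model (same assumed form as before: x[k+1] = A x + B u, y = C x).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[1.0],
              [0.5]])
C = np.array([[1.0, 0.0]])
N = 10      # prediction horizon: N > 1, unlike CDI's single step
lam = 1e-3  # small input penalty for regularization (assumed weight)

def mpc_step(x0, y_ref):
    """Minimize sum ||y[k] - y_ref||^2 + lam ||u[k]||^2 over N steps; return u[0]."""
    nu, ny = B.shape[1], C.shape[0]
    # Stacked prediction: Y = Phi x0 + G U over the horizon.
    Phi = np.vstack([C @ np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((N * ny, N * nu))
    for i in range(N):
        for j in range(i + 1):
            G[i*ny:(i+1)*ny, j*nu:(j+1)*nu] = C @ np.linalg.matrix_power(A, i - j) @ B
    Yref = np.tile(y_ref, N)
    H = G.T @ G + lam * np.eye(N * nu)
    U = np.linalg.solve(H, G.T @ (Yref - Phi @ x0))
    return U[:nu]  # receding horizon: apply only the first input, then re-optimize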
However, MPC implementation requires substantially more computation than CDI. Even if the most computationally efficient methods are used, MPC may require approximately 2N to 5N times more operations than CDI, where N may vary from 10 to 100 (i.e., roughly 20 to 500 times more operations). This large amount of computation makes MPC unsuitable for control of systems that require frequent updates (e.g., about every 25 milliseconds).
A third type of control law, known as linear quadratic Gaussian (LQG) control, is similar to MPC except that it cannot hold limits. The advantage of LQG is that it typically has good feedback stability margins, automatically tolerates model errors, NMP dynamics, and unstable modes, and is computationally simple. If N is large enough in an MPC controller, the MPC controller's robustness approaches that of the LQG control while adding a limit holding capability.
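The state-feedback half of an LQG controller is a fixed gain obtained from a quadratic cost, which is what makes LQG computationally simple at run time. A minimal sketch, computing such a gain by iterating the discrete-time Riccati equation with assumed, illustrative weights Q and R:

```python
import numpy as np

# Illustrative plant matrices (same assumed model form as above).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[1.0],
              [0.5]])
Q = np.eye(2)           # assumed state-cost weight
R = np.array([[1.0]])   # assumed input-cost weight

def lqr_gain(A, B, Q, R, iters=500):
    """Iterate P = A'PA - A'PB (R + B'PB)^-1 B'PA + Q, then return K for u = -K x."""
    P = Q.copy()
    for _ in range(iters):
        S = R + B.T @ P @ B
        P = A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(S, B.T @ P @ A) + Q
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

K = lqr_gain(A, B, Q, R)  # once computed offline, each update is just u = -K x
```

Unlike the MPC sketch, no online optimization is performed: the run-time cost is a single matrix-vector multiply per update, but nothing in the law enforces actuator or output limits.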
All three control design methods described above are frequently combined with an estimator (such as a Kalman filter, an asymptotic observer, a model tuner, etc.) in a design technique well known as loop transfer function recovery, in which the estimator is designed to obtain the desired feedback properties.
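As one example of the estimators mentioned above, a single predict/correct cycle of a Kalman filter can be sketched as follows. The noise covariances W and V and the model matrices are illustrative assumptions, not values from any particular design:

```python
import numpy as np

# Illustrative model (same assumed form as above) and assumed noise covariances.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
W = 0.01 * np.eye(2)    # assumed process-noise covariance
V = np.array([[0.1]])   # assumed measurement-noise covariance

def kalman_step(xhat, P, y):
    """One predict/correct cycle; returns the updated state estimate and covariance."""
    # Predict: propagate the estimate and its uncertainty through the model.
    xhat = A @ xhat
    P = A @ P @ A.T + W
    # Correct: blend in the measurement y via the Kalman gain L.
    S = C @ P @ C.T + V
    L = P @ C.T @ np.linalg.inv(S)
    xhat = xhat + L @ (y - C @ xhat)
    P = (np.eye(2) - L @ C) @ P
    return xhat, P
```

In the loop-transfer-function-recovery setting, the designer shapes such an estimator (e.g., through the choice of W and V) so that the compensator recovers the desired feedback properties.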
Thus, it can be seen that each of the control laws described above has certain advantages and disadvantages. It would be beneficial if a control law could be developed that combined the advantages of the three control laws while at least substantially circumventing their disadvantages. Specifically, it would be beneficial to develop a control law that provides the loop transfer function recovery, robustness, NMP tolerance, and unstable mode tolerance of the LQG controller and the limit holding capability of the CDI and MPC laws, with a considerably smaller computational burden than the MPC law.