This invention applies to dynamical systems. The invention is a method of controlling systems and devices that change in time in such a way that their evolution is (approximately) described by deterministic laws. Examples of such systems and devices include mechanical devices, electrical circuits, chemical reactors, etc. An equilibrium of a dynamical system or device is a state of the system or device that does not change in time. An equilibrium is asymptotically stable if, following small perturbations of the equilibrium state, the system or device eventually returns to the equilibrium state of its own accord. A pendulum capable of swinging in a full circle is a simple mechanical device illustrating these concepts; it has two equilibrium positions, in both of which it is motionless: hanging straight down and balanced upright.
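As a numerical illustration of asymptotic stability (this sketch is not part of the invention; the damped-pendulum model in scaled units and all parameter values are chosen purely for illustration), the following code perturbs a pendulum from its hanging equilibrium and lets it evolve. The state returns to the equilibrium of its own accord:

```python
import math

def simulate_pendulum(theta0, omega0, damping=0.5, dt=0.001, t_final=40.0):
    """Damped pendulum in scaled units: theta'' = -damping*theta' - sin(theta),
    with theta measured from the hanging-down equilibrium. Explicit Euler steps."""
    theta, omega = theta0, omega0
    for _ in range(int(t_final / dt)):
        domega = -damping * omega - math.sin(theta)
        theta += dt * omega
        omega += dt * domega
    return theta, omega

# Perturb the hanging equilibrium (theta = 0) and let the pendulum run freely.
theta, omega = simulate_pendulum(0.4, 0.0)
# Both theta and omega should decay toward zero: the equilibrium is asymptotically stable.
```

The upright equilibrium, by contrast, would send nearby states away rather than attract them.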
Two common engineering goals are to design systems that have a stable equilibrium at a desired state, and to modify existing systems so that an unstable equilibrium becomes stable. Feedback control is the standard approach to the second task. Measurements of the state of the dynamical system are fed to a device that affects the system's motion. In linear feedback control, the amplitude of the control force is a linear function of the displacement of the system from its equilibrium. The feedback law is often computed with a microprocessor.
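A linear feedback law can be made concrete with a small sketch (illustrative only; the scaled model theta'' = theta + u and the gains k1, k2 are assumptions, not taken from the invention). The open-loop system is unstable, and the linear law u = -k1*theta - k2*theta' places both closed-loop eigenvalues at -1, so displacements decay:

```python
def step(theta, omega, k1=2.0, k2=2.0, dt=0.001):
    """One Euler step of the unstable system theta'' = theta + u,
    under the linear feedback law u = -k1*theta - k2*omega."""
    u = -k1 * theta - k2 * omega      # control force: linear in the displacement
    domega = theta + u                # open loop theta'' = theta is unstable
    return theta + dt * omega, omega + dt * domega

theta, omega = 0.3, 0.0
for _ in range(15000):                # simulate 15 seconds
    theta, omega = step(theta, omega)
# The displacement decays toward the equilibrium under feedback.
```

Without the feedback term, the same simulation diverges exponentially from any nonzero initial displacement.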
A pendulum again serves as an illustration. The fulcrum of the pendulum is placed on a cart that can be accelerated horizontally. With the proper feedback law, the pendulum can be kept vertically upright. This example may seem like a "toy", but the principles apply to general systems. For example, process control in the chemical industry often entails the application of feedback control to chemical manufacturing processes. The pendulum example can also be regarded as a model for keeping a rocket from falling over by employing horizontal thrust at its base.
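The cart-mounted pendulum can be sketched numerically (a simplified model, not from the invention: the cart's acceleration a is taken directly as the control input, and the pendulum dynamics reduce to theta'' = sin(theta) - a*cos(theta) in scaled units, with theta measured from upright). A linear feedback law computed from the linearization holds the full nonlinear pendulum upright:

```python
import math

def simulate_cart_pendulum(theta0, k1=3.0, k2=2.0, dt=0.001, t_final=12.0):
    """Inverted pendulum on a cart, scaled units: theta'' = sin(theta) - a*cos(theta),
    where the cart acceleration a is the control input and theta = 0 is upright."""
    theta, omega = theta0, 0.0
    for _ in range(int(t_final / dt)):
        a = k1 * theta + k2 * omega                    # linear feedback law
        domega = math.sin(theta) - a * math.cos(theta)
        theta += dt * omega
        omega += dt * domega
    return theta, omega

# An initial tilt of 0.3 rad is driven back to the upright equilibrium.
theta, omega = simulate_cart_pendulum(0.3)
```

Although the gains are computed from the linearization sin(theta) ≈ theta, they stabilize the nonlinear model for initial tilts of this size.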
The theory of linear feedback control applies to nonlinear systems, but there are limits to its effectiveness. The domain of stability of a feedback controller consists of those states of the system which will be drawn to the equilibrium by the controller. A small domain of stability is undesirable, since fluctuations or disturbances may knock the system out of this domain. The results can be catastrophic: an electrical power system suffers an outage, an airplane crashes, the upright pendulum falls over.
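A limited domain of stability can be demonstrated with the same kind of sketch (the model, the torque bound u_max, and the gains are illustrative assumptions, not from the invention). When the available control torque is bounded, the linear law recovers the upright pendulum from a small initial angle but cannot prevent it from falling from a larger one:

```python
import math

def run(theta0, k1=4.0, k2=3.0, u_max=0.8, dt=0.001, t_final=10.0):
    """Inverted pendulum theta'' = sin(theta) + u, with the linear feedback
    torque clipped to [-u_max, u_max]. Returns the final state and the
    largest tilt reached along the way."""
    theta, omega = theta0, 0.0
    peak = abs(theta)
    for _ in range(int(t_final / dt)):
        u = max(-u_max, min(u_max, -k1 * theta - k2 * omega))
        domega = math.sin(theta) + u
        theta += dt * omega
        omega += dt * domega
        peak = max(peak, abs(theta))
    return theta, omega, peak

inside_theta, inside_omega, _ = run(0.3)   # inside the domain of stability: recovers
_, _, outside_peak = run(1.2)              # outside it: the torque saturates, the pendulum falls
```

From 0.3 rad the saturated controller still drives the state back to upright; from 1.2 rad gravity exceeds the available torque and the tilt grows past vertical, illustrating a state knocked outside the domain of stability.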
This invention is a feedback control strategy that augments linear feedback control, enlarging the domain in which a system is stabilized. The strategy employs a "hybrid" or "switching" technique to guide a system into the domain of stability of a linear controller from a larger region. The general concept that this might be possible may have been proposed earlier, but the method of this invention is new.
Specifically, the invention features a hybrid strategy that can be implemented using the same data, computed at the equilibrium, that a linear controller uses. From this data, "switching surfaces" in the phase space are determined. When the system encounters one of these switching surfaces, the controller is set to a fixed value, which is maintained until the next time that the system hits one of the switching surfaces or enters the domain of stability of a linear controller. For systems with two unstable degrees of freedom, the method of the invention provides specific choices of switching surfaces, as well as set values for the controller. These lead the system to a stable oscillation close to the equilibrium point. When the switching surfaces are chosen closer together, the size of this oscillation diminishes, so that the system enters the region where a linear feedback control can be employed.
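A schematic reading of such a hybrid strategy can be sketched as follows (an illustration only: the particular surfaces omega + theta = +/-d, the set values +/-u_max, the handover radius r, and the gains are arbitrary choices, not the specific choices of the invention). Between crossings of two nearby switching surfaces the control is held at a fixed value; once the state enters a small neighborhood of the equilibrium, a linear controller takes over:

```python
import math

def hybrid_control(theta0, omega0, d=0.1, r=0.3, u_max=2.0,
                   k1=4.0, k2=3.0, dt=0.001, t_final=15.0):
    """Schematic hybrid strategy for the inverted pendulum theta'' = sin(theta) + u.
    Crossing the surface omega + theta = +d sets u = -u_max; crossing
    omega + theta = -d sets u = +u_max; between crossings the value is held.
    Once the state enters a ball of radius r about the upright equilibrium,
    a linear controller takes over."""
    theta, omega, u, linear = theta0, omega0, 0.0, False
    for _ in range(int(t_final / dt)):
        if linear or theta * theta + omega * omega < r * r:
            linear = True                    # inside the linear controller's domain
            u = -k1 * theta - k2 * omega
        elif omega + theta >= d:
            u = -u_max                       # hit the upper switching surface
        elif omega + theta <= -d:
            u = u_max                        # hit the lower switching surface
        # otherwise: hold the previously set value until the next surface
        domega = math.sin(theta) + u
        theta += dt * omega
        omega += dt * domega
    return theta, omega

# Start well outside the linear controller's domain; the switching phase
# guides the state inward, then the linear controller finishes the job.
theta, omega = hybrid_control(1.0, 0.0)
```

Between the two surfaces the state oscillates back and forth while drifting toward the equilibrium; shrinking d moves the surfaces closer together and shrinks that oscillation, matching the behavior described above.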