1. Field of the Invention
The present invention relates generally to closed loop feedback control systems for actuators, and, more specifically, to an algorithm for predicting change in actuator position in order to reduce the tolerance range between maximum and minimum values utilized for detecting faults in the actuator feedback control system.
2. Description of the Related Art
In actuator feedback control systems using a position indicator in the feedback loop, such as a Linear Variable Differential Transformer (LVDT), it is necessary to detect errors or faults in the control loop. For example, the LVDT may fail, providing an erroneous reading of the actuator position. Such errors are generally detected by comparing a signal representative of the measured movement to a modeled signal based upon the demanded actuator movement, where the modeled signal has maximum (roof) and minimum (floor) values for a given demand signal. If the measured signal falls outside the maximum or minimum value for a given demand signal, the system indicates a fault.
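The roof/floor comparison described above can be illustrated by the following sketch. The model function, tolerance value, and signal names are hypothetical placeholders chosen for illustration; they are not taken from the invention itself.

```python
def model_position(demand: float) -> float:
    """Hypothetical model of the expected actuator position for a
    given demand signal (an ideal actuator tracking demand exactly)."""
    return demand

def detect_fault(demand: float, measured: float, tolerance: float) -> bool:
    """Return True if the measured position falls outside the maximum
    (roof) or minimum (floor) bounds around the modeled position."""
    expected = model_position(demand)
    roof = expected + tolerance
    floor = expected - tolerance
    return measured > roof or measured < floor
```

A measured position within the tolerance band passes; one outside the band is flagged as a fault.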
In many systems, it has been necessary to use a large tolerance range between the maximum and minimum values to avoid false faults caused by the accumulation of errors over several time intervals or frames. This may occur, for example, during a step change in demand or due to slew rate variation. This large tolerance range, however, reduces the ability of the system to detect in-range failures and, consequently, its ability to distinguish actual failures from false ones.
Accordingly, it would be advantageous to design a system where the tolerance range is minimized without increasing the risk of false faults. It would further be advantageous to design a system where the tolerance range is minimized by using an algorithm which predicts the change in actuator position for each interval of time, thereby eliminating the accumulation of errors over consecutive time intervals.
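The per-interval prediction approach outlined above can be sketched as follows. A simple slew-rate-limited actuator model is assumed here for illustration; the function names, slew limit, and frame time are hypothetical and not specified by the invention.

```python
def predict_step(position: float, demand: float,
                 slew_limit: float, dt: float) -> float:
    """Predict the actuator position at the next frame by moving toward
    the demand, limited by the maximum slew rate over one frame time dt."""
    step = demand - position
    max_step = slew_limit * dt
    if step > max_step:
        step = max_step
    elif step < -max_step:
        step = -max_step
    return position + step

def frame_fault(predicted: float, measured: float, tolerance: float) -> bool:
    """Compare the measured position to the per-frame prediction.
    Because the comparison is made anew each frame, the tolerance need
    only cover one frame's error rather than an accumulated sum, which
    permits a much tighter tolerance band."""
    return abs(measured - predicted) > tolerance
```

During a step change in demand, the prediction follows the slew-limited trajectory frame by frame, so a correctly tracking actuator stays within the narrow tolerance at every interval.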