In digital control algorithm development, the design and performance of digital controllers are typically driven by two factors: the degradation of system response as a function of delay time (the sample-acquisition and computation time) and the cost of the hardware used to implement the algorithm. Delay time degrades the system response, producing oscillatory or unstable behavior as the delay approaches the sample period of the compensator. Ideally, the delay should be somewhat less than half the sample period; as a rule of thumb, very good results can be obtained with digital compensator implementations whose delay time is less than one fifth of the sample period. Compensator design methods typically use the present input sample in calculating the present output sample. Because the acquisition and calculation time is finite and nonzero, the system response obtained with these techniques deviates from the ideal case of zero delay time and degrades as the delay increases. It is therefore desirable to have a digital compensator that provides the ideal case of zero delay time.
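The degradation described above can be illustrated with a minimal simulation. The sketch below is not the compensator of this disclosure; it is a toy discrete proportional loop around an integrating plant, with arbitrary assumed gain and plant values, showing how a one-sample computational delay turns a well-damped response into an oscillatory one.

```python
def simulate(delay_samples, steps=60, a=1.0, b=0.5, K=1.8):
    """Simulate the plant x[k+1] = a*x[k] + b*u[k] under proportional
    feedback u[k] = -K * x[k - delay_samples].  All values are
    illustrative assumptions, not taken from the text above."""
    x = [1.0]  # unit initial disturbance
    for k in range(steps):
        # With delay, the controller acts on a stale measurement.
        meas = x[k - delay_samples] if k >= delay_samples else 0.0
        x.append(a * x[-1] + b * (-K * meas))
    return x

def sign_changes(seq, tol=1e-9):
    """Count zero crossings as a crude oscillation measure."""
    s = [v for v in seq if abs(v) > tol]
    return sum(1 for p, q in zip(s, s[1:]) if p * q < 0)

no_delay = simulate(0)    # output computed from the present sample
one_sample = simulate(1)  # full-sample acquisition/computation delay
```

With zero delay the closed-loop pole is at 0.1 and the response decays monotonically; with a one-sample delay the characteristic equation gains a second root pair of magnitude about 0.95, so the same gain yields a slowly decaying oscillation, consistent with the degradation described above.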
Hardware cost is related to delay time because as controller demands increase, so do the deleterious effects of the delay. By using a faster, and therefore more expensive, processor, the ratio of delay time to sample time can be reduced below the 1/5 rule of thumb. The cost of the analog-to-digital and digital-to-analog converters required for a digital compensator implementation must also be considered; these devices generally have delay times (acquisition and settling times) that can be reduced only by purchasing more expensive devices.
Accordingly, it will be appreciated that it would be highly desirable to have a digital compensator that provides ideal or nearly ideal system response with inexpensive processors and converters.