One of the primary factors contributing to ranging inaccuracies in radar transponders or beacons is the variation in transponder delay with changes in input signal strength. The change in delay results from the transponder responding to the input signal at different points on the leading edge as the input signal amplitude changes. This occurs because of the finite rise times and fixed thresholds of the signal detection circuits. The effect is illustrated in FIG. 1. Smaller signal 10 has threshold point 12 while larger signal 14 has threshold point 16, the threshold level being set at line 18. Time differential 20 is denoted as Δt. It may readily be seen, then, that Δt is a function of the amplitude of the input signal.
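The amplitude dependence of the threshold-crossing time can be sketched numerically. Assuming, for illustration only, a linear leading edge of rise time t_r reaching peak amplitude A, a fixed threshold V_th is crossed at t = t_r·V_th/A, so a weaker pulse crosses later; the rise time and threshold values below are arbitrary, not taken from the patent:

```python
# Illustrative sketch (assumed linear leading edge; values are arbitrary):
# a fixed threshold V_th on a ramp of rise time t_r is crossed at
# t = t_r * V_th / A, so the crossing time depends on peak amplitude A.

def crossing_time(amplitude, v_threshold=0.5, rise_time=10e-9):
    """Time at which a linear leading edge of the given peak amplitude
    reaches the fixed threshold voltage."""
    if amplitude <= v_threshold:
        raise ValueError("pulse never reaches threshold")
    return rise_time * v_threshold / amplitude

t_small = crossing_time(1.0)    # weaker signal (10 in FIG. 1)
t_large = crossing_time(10.0)   # stronger signal (14 in FIG. 1)
delta_t = t_small - t_large     # the time differential Δt (20 in FIG. 1)
print(f"Δt = {delta_t * 1e9:.2f} ns")  # → Δt = 4.50 ns
```

The weaker pulse crosses the threshold 4.5 ns later in this example, which is the ranging error the circuits discussed below seek to remove.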
In the past, slow automatic gain control (AGC) systems and a delay stabilizer used in conjunction with a linear-logarithmic IF amplifier have been employed to minimize this effect. However, slow AGC is objectionable in many applications. Where the transponder is being interrogated by more than one tracking radar, the strongest received signal captures the receiver and the weaker signals will not produce any stabilized output. The logarithmic IF amplifier and delay stabilizer overcome this objection, but the precision of the circuit is limited by the ability of the IF amplifier to produce an output that is truly the logarithm of the input.
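The limitation of the logarithmic approach can be sketched as follows. This is an assumed model, not the patent's circuit: the stabilizer estimates the pulse amplitude from the log amplifier's output and subtracts the corresponding threshold-crossing delay. If the amplifier output departs from a true logarithm, the correction is wrong by a residual delay:

```python
import math

# Assumed model (not the patent's circuit): a delay stabilizer infers pulse
# amplitude from a logarithmic IF amplifier's output and subtracts the
# resulting threshold-crossing delay of an assumed linear leading edge.

RISE_TIME = 10e-9   # arbitrary illustrative rise time (s)
V_TH = 0.5          # arbitrary fixed threshold (V)

def log_amp_output(amplitude, conformance_error=0.0):
    """Log IF amplifier output; conformance_error models departure
    from a true logarithm of the input."""
    return math.log10(amplitude) + conformance_error

def residual_delay(amplitude, conformance_error=0.0):
    """Actual amplitude-dependent delay minus the stabilizer's correction."""
    actual = RISE_TIME * V_TH / amplitude               # true crossing delay
    est_amplitude = 10 ** log_amp_output(amplitude, conformance_error)
    correction = RISE_TIME * V_TH / est_amplitude       # stabilizer's estimate
    return actual - correction

print(residual_delay(10.0))        # perfect log amp: residual is zero
print(residual_delay(10.0, 0.05))  # log-conformance error leaves a residual
```

With a perfect logarithmic characteristic the residual is zero at every amplitude; any departure from true log conformance leaves an uncorrected delay, which is the precision limit noted above.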