When measuring the time or frequency stability of gyroscopes, accelerometers, precision frequency standards (atomic clocks), or their related integrated modules, one of the key measurements belongs to a family of time-dependent algorithms most often referred to as the Allan variance or Allan deviation. Other members of this family of algorithms include the overlapping Allan variance, modified Allan variance, time variance, total Allan variance, modified total variance, Hadamard variance, overlapping Hadamard variance, total Hadamard variance, Theo1, and the like.
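For reference, the basic (non-overlapping) Allan variance that anchors this family is conventionally written, for fractional-frequency data, as:

```latex
\sigma_y^2(\tau) = \frac{1}{2(M-1)} \sum_{k=1}^{M-1} \left( \bar{y}_{k+1} - \bar{y}_k \right)^2
```

where $\bar{y}_k$ is the $k$-th non-overlapping average of the fractional-frequency samples over the averaging time $\tau$, and $M$ is the number of such averages. The Allan deviation $\sigma_y(\tau)$ is its square root.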
The time to perform a full Allan variance, Allan deviation, or the like, for all data points, grows quadratically with the number of data points (N) and generates a plot with N/2 averaging values. One solution to reduce processing time for the Allan variance, Allan deviation, or the like, is to calculate at discrete, exponentially increasing averaging times (tau). This is done in factors of two (1, 2, 4, 8, . . . ), called octaves, in factors of ten (decades), or by other spacings, to limit the number of points calculated from N/2 down to tau values evenly spaced on a logarithmic x-axis.
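As a minimal sketch of this octave-spaced approach, the following Python function (the name `adev_octaves` and its interface are illustrative, not from the original text) computes a non-overlapping Allan deviation only at averaging times that double at each step:

```python
import numpy as np

def adev_octaves(y, tau0=1.0):
    """Non-overlapping Allan deviation at octave-spaced averaging times.

    y: fractional-frequency samples taken at a fixed interval tau0.
    Returns (taus, adevs) arrays, one entry per octave of tau.
    """
    y = np.asarray(y, dtype=float)
    N = len(y)
    taus, adevs = [], []
    m = 1
    while 2 * m <= N:  # need at least two averages at each tau
        M = N // m
        # average y in non-overlapping bins of m samples each
        ybar = y[: M * m].reshape(M, m).mean(axis=1)
        d = np.diff(ybar)
        avar = 0.5 * np.mean(d * d)  # 1/(2(M-1)) * sum of squared diffs
        taus.append(m * tau0)
        adevs.append(np.sqrt(avar))
        m *= 2  # octave spacing: tau = 1, 2, 4, 8, ... times tau0
    return np.array(taus), np.array(adevs)
```

Because m doubles each pass, only about log2(N) tau values are evaluated instead of N/2, which is what places the points evenly on a logarithmic x-axis.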
These variance calculations are often performed post-measurement because regularly recalculating such equations can still be processor- and memory-intensive, and such equations can evolve and grow with each additional pair of data points. Thus, variance calculations are used less than they could be because of the large overhead of performing them. As such, it would be difficult, or at least not cost effective, to implement variance calculations in a gyroscope, chip-scale atomic clock, cold-atom atomic clock, accelerometer, frequency reference, or any other time-varying measurement using low-power, miniaturized processors. The problem of making variance calculations in such devices is usually solved by using a single expensive test instrument, or via post-processing using an advanced mathematical software suite.
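To illustrate why recalculation from scratch is not strictly necessary, the sketch below (a hypothetical class, not the original's method) maintains a non-overlapping Allan variance at one fixed averaging time incrementally: each new sample updates a handful of running scalars, so memory stays constant as data accumulates.

```python
class StreamingAllan:
    """Incrementally maintain a non-overlapping Allan variance at one
    fixed averaging time (m samples per averaging bin).

    Illustrative sketch: only a few scalars are stored, so the update
    cost per sample is O(1) in both time and memory.
    """

    def __init__(self, m):
        self.m = m            # samples per averaging bin
        self.acc = 0.0        # partial sum for the current bin
        self.count = 0        # samples accumulated in the current bin
        self.prev_avg = None  # last completed bin average
        self.sum_sq = 0.0     # running sum of squared successive differences
        self.pairs = 0        # number of differences accumulated (M - 1)

    def add(self, sample):
        """Fold one new fractional-frequency sample into the running sums."""
        self.acc += sample
        self.count += 1
        if self.count == self.m:
            avg = self.acc / self.m
            if self.prev_avg is not None:
                d = avg - self.prev_avg
                self.sum_sq += d * d
                self.pairs += 1
            self.prev_avg = avg
            self.acc, self.count = 0.0, 0

    def avar(self):
        """Current Allan variance estimate: sum_sq / (2 * (M - 1))."""
        return self.sum_sq / (2 * self.pairs) if self.pairs else float("nan")
```

One such accumulator per octave-spaced tau would give a continuously updated log-spaced Allan deviation plot without ever revisiting old samples, which is the kind of structure a low-power embedded processor could sustain.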