1. Field of the Invention
The present invention generally relates to the analysis of timing in electrical circuits and, more particularly, to the analysis of complex digital machines, such as logic circuits, which exhibit statistical variability in their propagation delays.
2. Description of the Prior Art
All structures through which a variable electrical signal can be propagated will exhibit a finite propagation time or delay in response to variations in that electrical signal. This propagation delay will typically be different for different kinds of devices and will also vary from device to device among devices which are structurally similar. While propagation delays may be of little significance in some types of electrical circuits, they are often critical to the performance of digital machines such as logic circuits where, typically, many serial stages and many possible signal propagation paths will exist. Therefore, techniques for analyzing propagation delays in digital logic circuits are critical to the design of such circuits.
In analyzing the design of logic circuits, the timing of signals arriving at some elements, such as latches or clocked elements (hereinafter collectively referred to as race-sensitive elements), must be evaluated as if each possible path of a signal through the circuit were separately considered. If all possible paths through the circuit show the correct sequence of signal arrival times at each element (or if the correct sequence can otherwise be inferred), the logic circuit can be considered unconditionally operative for a given cycle time. Conversely, the shortest potential cycle time, or highest clock rate, can be determined by computing the so-called slack at each element. Slack is essentially a timing tolerance which decreases with decreasing cycle time. When slack becomes negative at any element, the cycle time has been reduced below the minimum cycle time at which the circuit design is unconditionally operative. As is well understood, this analysis is important in defining the specifications of manufactured devices, including, but not limited to, chips, boards, assemblies of boards and complete processors, memory systems, computers, networks and the like.
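The slack computation described above can be sketched as follows. This is an illustrative example only, not the method of any particular prior art tool; the function names, times and the assumption that the required arrival time scales directly with the cycle time are hypothetical.

```python
# Hypothetical sketch of the slack computation: slack is the required
# arrival time at a race-sensitive element minus the actual arrival
# time. Positive slack means the signal arrives in time; negative
# slack means a timing failure at that element.

def slack(actual_arrival, required_arrival):
    """Timing tolerance at a race-sensitive element (in ns)."""
    return required_arrival - actual_arrival

def min_cycle_time(path_delay):
    """Smallest cycle time at which slack is still non-negative,
    assuming (for illustration) the required arrival time equals
    the cycle time."""
    return path_delay

# A signal needing 8 ns on its path, captured once per cycle:
assert slack(actual_arrival=8.0, required_arrival=10.0) == 2.0   # positive: OK
assert slack(actual_arrival=8.0, required_arrival=7.0) == -1.0   # negative: failure
assert min_cycle_time(8.0) == 8.0  # below 8 ns, slack goes negative
```

As the cycle time shrinks, the required arrival time shrinks with it, so the slack at every element decreases; the first element whose slack reaches zero fixes the minimum unconditionally operative cycle time.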
However, since all possible paths through a logic circuit must be, at least inferentially, considered and the slack at the input of each race-sensitive element must be evaluated, an enormous computational burden is presented in the analysis of all but the simplest logic circuits. For this reason, various known techniques for analyzing designs including significant statistical variation and multiple technologies resort to computational simplifications which, while allowing analysis to be performed in a reasonable amount of time, introduce departures from the true performance of the design, when implemented in manufactured devices.
One such type of technique is known as worst-case static timing analysis, in which delays are assigned for each stage and propagation delays are computed for each stage of each path through the circuit. These delays are compared at selected stages where the propagated signals may interact, such as at clocked logic elements. Of course, such an analysis technique cannot treat the elements of the logic circuit as ideal or even typical if useful results are to be obtained. Typically, analysis is done on the basis of "worst case" or "maximum-minimum" scenarios. Both of these scenarios assume conditions such as the latest possible signal arrival times and earliest possible clock arrival times and determine the operability of the circuit on the basis of positive or negative slack. Slack will be positive if the signals arrive at a given logic element in the proper order and negative if they arrive in an order incorrect for producing the desired response. Because of the assumption of worst-case conditions rather than the actual, statistically variable, performance characteristics of each of the devices in the circuit, many signal paths through the circuit will appear to have defects where, in practice, none will be found to exist in a large percentage of the actual circuits once fabricated.
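A minimal sketch of such a worst-case analysis propagates the latest possible arrival time forward through the stages of the circuit and checks slack at a capture element. The gate names, delays and required time below are purely illustrative assumptions, not values from any actual design.

```python
# Worst-case static timing sketch: each stage is (gate, max_delay,
# predecessors), listed in topological (signal-flow) order. The latest
# arrival at a gate is the latest arrival among its inputs plus the
# gate's worst-case delay.

def latest_arrival(stages, inputs):
    """inputs maps primary-input pins to their arrival times;
    returns the latest possible arrival time at every node."""
    arrival = dict(inputs)
    for gate, delay, preds in stages:
        arrival[gate] = max(arrival[p] for p in preds) + delay
    return arrival

stages = [
    ("g1", 2.0, ["a", "b"]),       # g1 fed by primary inputs a and b
    ("g2", 3.0, ["g1", "b"]),
    ("latch", 1.0, ["g2"]),        # race-sensitive capture element
]
arr = latest_arrival(stages, {"a": 0.0, "b": 0.5})
assert arr["latch"] == 6.5         # 0.5 + 2.0 + 3.0 + 1.0
assert 10.0 - arr["latch"] == 3.5  # positive slack against a 10 ns requirement
```

Because every stage contributes its maximum delay simultaneously, the computed arrival time is more pessimistic than what any statistically realistic sample of fabricated devices would exhibit, which is the source of the false defects noted above.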
Other static timing analyses achieve somewhat improved results by providing for inclusion of statistical variation to a greater or lesser extent. However, all previously known forms of these simulations carry an increased computational burden and also require simplifying assumptions to be made.
One such technique incorporating statistical variation is known as Statistical Global Static Timing Analysis. This technique calculates candidate signal arrival times at each node by standard statistical formulas using known element delay variances and user-specified correlation coefficients within a few major categories. The worst (e.g. earliest or latest) of these arrival times is used to compute the performance of the next element or node in each circuit path. For this reason, this technique is constrained to follow each path sequentially, segment-by-segment, through the device. This technique yields reasonably good results under the above constraints but breaks down when different technologies having different correlation coefficients are involved. Due to the computational burden, the technique is usually limited to the use of only five correlation coefficients, as will be discussed below.
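The segment-by-segment statistical combination described above can be sketched as follows, assuming (as the standard formulas imply) that each delay is characterized by a mean and a standard deviation and that correlated delays add with variance s1² + s2² + 2·rho·s1·s2. The selection of the "worst" candidate by mean plus three standard deviations, and all numeric values, are illustrative assumptions only.

```python
# Sketch of segment-by-segment statistical arrival-time propagation:
# delays are (mean, sigma) pairs, segment delays are summed with a
# user-specified correlation coefficient, and the worst (latest)
# candidate arrival is carried forward to the next node.
import math

def add_delays(m1, s1, m2, s2, rho):
    """Sum of two correlated delays: means add; the variance is
    s1^2 + s2^2 + 2*rho*s1*s2."""
    mean = m1 + m2
    sigma = math.sqrt(s1 * s1 + s2 * s2 + 2.0 * rho * s1 * s2)
    return mean, sigma

def worst_candidate(candidates, k=3.0):
    """Pick the latest candidate arrival time by mean + k*sigma
    (an assumed selection criterion for illustration)."""
    return max(candidates, key=lambda ms: ms[0] + k * ms[1])

a = add_delays(5.0, 0.5, 3.0, 0.5, rho=1.0)   # fully correlated segments
b = add_delays(5.0, 0.5, 3.0, 0.5, rho=0.0)   # independent segments
assert a == (8.0, 1.0)              # with rho = 1, the sigmas add directly
assert worst_candidate([a, b]) == a # the correlated sum is the later candidate
```

Because only the worst candidate survives at each node, the analysis must walk each path sequentially, and every distinct pair of technologies in a path requires its own correlation coefficient, which is how the small fixed set of coefficients becomes a limitation.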
Another known technique, referred to as Statistical Path-Oriented Static Timing Analysis, uses statistical formulas similar to those of Statistical Global Static Timing Analysis, described above, and provides for some additional choices of correlation coefficients (e.g. for additional technologies). This technique, however, analyzes each path individually rather than segment-by-segment as in Statistical Global Static Timing Analysis. This technique, while producing statistically accurate results in most cases, is impractical in the analysis of current designs because of the computation time involved.
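The reason path-by-path analysis becomes impractical can be illustrated by counting paths. In a hypothetical layered circuit in which every gate in one stage feeds every gate in the next, the number of distinct paths grows exponentially with circuit depth; the structure below is an illustrative assumption, not a claim about any particular design.

```python
# Why per-path analysis is computationally impractical: the number of
# source-to-sink paths through a fully connected layered graph of
# `depth` stages, each `width` gates wide, is width**depth.

def path_count(width, depth):
    """Distinct paths from a single source to a single sink through
    `depth` fully connected stages of `width` gates each."""
    return width ** depth

assert path_count(2, 10) == 1024
assert path_count(4, 20) == 4 ** 20   # more than a trillion paths
```

Even at modest widths and depths the path count dwarfs the node count, so any analysis whose cost is proportional to the number of paths, rather than the number of nodes or segments, quickly exceeds practical computation time.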
While each of the above types of static timing analysis can produce usable results under some conditions, restrictions on applicability have diminished the effectiveness of such techniques as logic circuits have become increasingly complex and have utilized multiple technologies in their implementation. Also, as a group, all of the above static timing analysis methods exhibit a trade-off between computation time and accuracy for a given design. Individually, the accuracy of each technique is not adjustable because none provides for simulation with multiple runs to include statistically variable timing conditions. As logic designs have become more complex, techniques which provide more realistic results have required prohibitive amounts of computation time. Moreover, as aggressiveness in full utilization of logic circuit performance has increased, the most serious drawback of the various static timing analyses, described above, is the fact that each includes simplifying assumptions which preclude improvement of the degree of accuracy of the particular technique in use.
A technique which provides the ability to predict and improve the degree of accuracy of the timing analysis is described in "Statistical Software Finds Timing Errors and Suggests Fixes" by Stanley Hyduke; Electronic Design; October, 1987; pages 75-77. U.S. Pat. No. 4,791,357, to Hyduke, is also apparently related to this process. However, the process or algorithm employed by Hyduke in this Monte Carlo technique (so called due to its utilization of random number generation) does not appear to have been published. (It is to be understood that the present invention, since it also relies on random number generation, is also referred to as a Monte Carlo technique and, hereinafter, for purposes of clarity, the known Monte Carlo techniques, including that described in the Hyduke article, will be referred to as prior art Monte Carlo techniques.)
As a practical matter, implementations of the prior art Monte Carlo techniques also require simplifying assumptions regarding the nature and characterization of the delay distribution of each element and thus are somewhat subject to the same inaccuracies and departures from accurate characterization of real devices as the static timing analyses discussed above, posing a similar limitation on the applicability of the technique. In fact, to perform the prior art Monte Carlo technique, an equiprobable, rectangular distribution has been assigned to all elements, apparently in view of programming complexity or other considerations, as disclosed in the Hyduke article cited above. In such a case, it is not possible to determine the exact percentage of failed nodes in a design, when implemented, but only a relative percentage of failures at each node to allow ranking of the importance of potential failure rates. When absolute failure rates are important, there is no alternative to the expenditure of large amounts of analysis time to find the actual distribution curve, which may still depart from reality if simplifying assumptions are made concerning the delay distributions of the individual elements.
The prior art Monte Carlo techniques are also not easily adaptable to multiple technologies or generalization, just as they are not easily applicable to different, non-equiprobable, timing variation distributions. Typically, a random number delay assignment is made for circuits on different substrates, with delay spreads peculiar to each circuit type. Then a second random variation which corresponds to a 30% spread among identical circuits on the same substrate is superimposed on the first random delay assignment. Clearly, this constitutes only the roughest of approximations of the actual distributions which may be encountered.
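The two-level random delay assignment described above can be sketched as follows. The function names and the per-substrate spread are hypothetical; the 30% within-substrate spread is taken from the text and is modeled here as a ±15% uniform (equiprobable, rectangular) factor.

```python
# Sketch of the prior-art Monte Carlo delay assignment: each substrate
# draws one uniform (rectangular) delay factor, and each identical
# circuit on that substrate superimposes a second, independent uniform
# factor (a 30% spread, i.e. roughly +/-15%, per the text above).
import random

def sample_delays(nominal, chip_spread, within_spread, n_gates, rng):
    """One Monte Carlo sample: a shared per-substrate factor, then an
    independent per-gate factor on the same substrate."""
    chip_factor = rng.uniform(1.0 - chip_spread, 1.0 + chip_spread)
    return [nominal * chip_factor
            * rng.uniform(1.0 - within_spread, 1.0 + within_spread)
            for _ in range(n_gates)]

rng = random.Random(42)  # fixed seed so the sample is repeatable
delays = sample_delays(nominal=2.0, chip_spread=0.5,
                       within_spread=0.15, n_gates=1000, rng=rng)
assert len(delays) == 1000
# Every delay stays inside the combined rectangular bounds:
assert all(2.0 * 0.5 * 0.85 <= d <= 2.0 * 1.5 * 1.15 for d in delays)
```

Because both factors are drawn from flat rectangular distributions, the sampled delays say nothing about how probable the extremes are relative to the center, which is precisely why only relative, rather than absolute, node failure rates can be extracted from such runs.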
A trade-off between computation time and accuracy also exists in prior art Monte Carlo techniques. As pointed out above, a fairly large number of samples must be computed to derive a statistically valid distribution of total delay times. Also, for purposes of diagnosing the portions of the design which will exhibit a significant failure rate, the process must be applied to smaller sub-path portions of each path, increasing the computational burden. While the increased computational burden may be of minor consequence in some cases, it should be noted that a substantial amount of computation time is spent looking for unlikely events and, in practice, the range of possible variation in timing is often artificially reduced so that only a large percentage of the faulty nodes is found during design. A further analysis is then required to find the remainder of the faulty nodes once the design has been refined.
In summary, while the prior art Monte Carlo techniques potentially offer determinability of the degree of accuracy which they provide, their improvements over static timing techniques in applicability, accuracy, computational complexity, and the provision of diagnostics and information useful in establishing specifications come only at the cost of a great expenditure of processing time. Therefore, a need is evident for applicability, accuracy, computation and reporting enhancements to statistical timing analysis to allow the potential of such statistical analysis techniques to be realized.