In computer systems, logic networks are used to perform a variety of functions. These networks are typically implemented in hardware form, for example, on VLSI (very large scale integration) integrated circuits. VLSI technology has enabled the fabrication of electronic circuits which provide hundreds of thousands of transistors in a single integrated circuit chip. Logic elements are constructed using one or more of these transistors, and the logic networks on the integrated circuit are constructed with the logic elements. Each integrated circuit chip may contain thousands of logic networks.
The logic elements (or gates) in a logic network are interconnected by conductive paths. Signals propagate through the network along these conductive paths and through the associated logic gates. The time it takes for a given signal to propagate through a particular logic network is a measurable parameter, determined in part by the inherent delays associated with both the logic gates and the conductive paths through which the signal passes. The length of the conductive paths is a further factor in this delay time.
The total delay time for a signal to pass through the same type of logic network implemented over a large number of integrated circuit chips may be approximated as a Gaussian or normal distribution. FIG. 1 shows such a distribution. The horizontal axis represents the total delay measured for a particular type of logic network implemented in a number of integrated circuits, and the vertical axis represents the number of integrated circuits.
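The approximation above can be sketched numerically. In this illustrative snippet, the per-chip total delay of one network type is drawn from a normal distribution; the mean and standard deviation are arbitrary example values, not figures from the text.

```python
import random
import statistics

# Illustrative sketch: total delay of one network type, measured across many
# chips, approximated as a normal (Gaussian) distribution, as in FIG. 1.
# The nominal delay and spread below are hypothetical example values.
NOMINAL_DELAY_NS = 10.0   # hypothetical mean total delay (ns)
SIGMA_NS = 0.8            # hypothetical chip-to-chip spread (ns)

random.seed(0)  # fixed seed for reproducibility
samples = [random.gauss(NOMINAL_DELAY_NS, SIGMA_NS) for _ in range(10_000)]

mean = statistics.mean(samples)
stdev = statistics.stdev(samples)
print(f"mean ~ {mean:.2f} ns, sigma ~ {stdev:.2f} ns")
```

With enough samples, the measured mean and standard deviation converge on the underlying parameters, mirroring how the histogram of measured delays approaches the bell curve of FIG. 1.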
The total delay time varies from integrated circuit chip to integrated circuit chip due in part to physical differences in the integrated circuit chips in which the logic networks are implemented. These differences are caused, for example, by physical variances in the raw material used to construct the chip and minor changes in production processes, both of which may vary over many production batches, as well as variances in the environmental conditions to which the chips are exposed (e.g., temperature, supply voltage, etc.). Because the entire range of speed variations under the curve of FIG. 1 applies to the accumulated production of a process line over many days, batches, and raw materials, it is extremely unlikely that two implementations of a logic network on a single die produced on a common silicon substrate would exhibit delays at the extremes of the curve. For chips that exhibit the fastest response times, even relatively slow networks would exhibit speeds near the left half of the curve (to the left of t1). For chips that exhibit the slowest behavior, the vast majority of the delay times encountered for networks on such a chip would lie near the right half of the curve (to the right of t2).
The types of signals which propagate across a logic network include clock and data signals. Logic networks which utilize both clock and data signals often rely on close synchronization of these signals for the networks to operate properly. Data and clock signals are designed to arrive at particular logic gates at precise times, one before or after the other. If, for example, a clock or data signal is expected at a particular logic gate in the network, and that signal arrives too early or too late with respect to the other, the network may either hang up or produce erroneous output results.
Accordingly, manufacturers of integrated circuits have developed delay tolerances which must be met by a particular integrated circuit for it to pass inspection and to ensure that logic networks within the integrated circuit will operate properly. Timing analysis is preferably performed on proposed network designs prior to the actual physical layout of the logic networks. The proposed network designs are analyzed to determine unknown parameters of the networks which may prevent correct timing operation of the network.
As the design of logic networks in computers becomes more complex, and as the speeds of operation of these networks continue to increase, the synchronization of the timing sequences of the logic elements in these networks must necessarily become tighter. Accordingly, the pre-fabrication analysis of the timing of these logic networks must become increasingly accurate, and the subsequent redesign of the circuits must become more efficient. Timing analysis of complex, high-speed digital network designs is required both to determine whether a predetermined performance objective has been met by the design, and to provide information helpful in the redesign of networks which fail to meet such a performance objective.
Because manufacturing and environmental factors affect the delay of the logic networks implemented in different integrated circuit chips, and these factors exhibit some degree of correlation, statistical timing analysis is helpful in identifying potential timing problems in these networks. A sure method of verifying the performance of a particular design is to compare, at a test point, the latest possible arrival time of a signal which must arrive first against the earliest possible arrival time of a signal which must arrive last. For example, as shown in FIG. 2, which represents a statistical distribution of the arrival times for a large number of data and clock signals propagated through a logic network, a performance parameter dictates that a data signal must arrive before a clock signal. During the time interval int.sub.p, some clock signals arrive before some data signals in the distribution, thereby violating the established performance parameter. Accordingly, one can assure performance of the analyzed circuit design by eliminating networks which exhibit timing characteristics within the time interval int.sub.p.
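The fully pessimistic comparison just described can be sketched as follows. The arrival-time lists are hypothetical example values, chosen so that the two distributions overlap (the interval int.sub.p of FIG. 2).

```python
# Hedged sketch of the fully pessimistic check described above: require the
# latest possible arrival of the signal that must come first (data) to
# precede the earliest possible arrival of the signal that must come last
# (clock). All arrival times are hypothetical example values in nanoseconds.
data_arrivals = [4.8, 5.1, 5.6, 6.2]   # possible data arrival times across chips
clock_arrivals = [5.9, 6.4, 6.8, 7.3]  # possible clock arrival times across chips

latest_data = max(data_arrivals)
earliest_clock = min(clock_arrivals)

# If the distributions overlap, this pessimistic test fails, even though on
# any single chip the data signal may well still beat the clock signal.
passes = latest_data < earliest_clock
print(f"overlap interval: [{earliest_clock}, {latest_data}], passes={passes}")
```

Here the test fails because the extremes overlap, even though the extremes come from different chips; this is exactly the pessimism the correlated methods below try to remove.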
Such a method of analyzing the performance of a logic network design is overly pessimistic, however, because, as explained above, the networks within the same integrated circuit are generally manufactured under similar conditions and are subject to similar environmental variables, and thus their delays are usually well correlated. This degree of correlation is not total, though. Therefore, it is overly optimistic to verify the performance of a particular design by comparing only the earliest possible arrival times of the data signals with the earliest possible arrival times of the clock signals (int.sub.o1), or by comparing only the latest possible arrival times of the data and clock signals (int.sub.o2).
Statistical methods of analyzing timing information in logic networks which recognize the correlation of this timing information are known. Examples of such methods are the so-called Monte Carlo method, the method disclosed in U.S. Pat. No. 4,924,430, and the ETE (early timing estimator) method. In the Monte Carlo method, a large number of separate timing analyses are performed in which corresponding clock and data signals are propagated along a particular logic network. In each analysis, the delays for all networks are randomly selected based on the expected distribution of delays for the networks and the expected correlation between those delays. The difference in arrival times between corresponding clock and data signals (the slack) is computed to determine the worst case. A network design passes the Monte Carlo test if none of the analyses shows a problem. Because hundreds or thousands of separate analyses may be needed to reach a confidence level that no problems exist, the Monte Carlo method is costly in both computer time and storage.
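A minimal Monte Carlo sketch, under stated assumptions, might look like the following. It is not the procedure of any particular tool: the correlation is modeled here by a single shared chip-wide factor that scales every delay, plus independent per-segment variation, and all delay values are illustrative.

```python
import random

# Hedged Monte Carlo sketch: in each trial, one chip-wide factor shifts all
# delays together (modeling the correlation between delays on one chip),
# and each path segment also gets its own independent variation.
# All numeric values are illustrative assumptions.
random.seed(1)  # fixed seed for reproducibility

def one_trial():
    chip_factor = random.gauss(1.0, 0.05)  # shared chip-wide variation
    # Data path: three segments with nominal delays 2.0, 1.5, 1.3 ns.
    data_delay = sum(chip_factor * random.gauss(d, 0.02) for d in (2.0, 1.5, 1.3))
    # Clock path: nominal delay 5.5 ns.
    clock_delay = chip_factor * random.gauss(5.5, 0.03)
    return clock_delay - data_delay  # slack: must be positive to pass

slacks = [one_trial() for _ in range(5_000)]
failures = sum(1 for s in slacks if s < 0)
print(f"worst-case slack ~ {min(slacks):.3f} ns, failing trials: {failures}")
```

The cost noted in the text is visible even in this toy: a confident verdict requires thousands of trials per test point, each re-sampling every delay in the network.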
The method disclosed in U.S. Pat. No. 4,924,430 computes clock/data slacks for all integrated circuits by determining values for the absolute minimum and maximum delays experienced across all chips, a relative minimum delay for a chip assuming that another chip provides the absolute maximum delay, and a relative maximum delay for a chip assuming that another chip provides the absolute minimum delay. These four values are propagated to generate four arrival times at a plurality of test points, and comparisons are made between the absolute maximum and relative minimum delays and between the absolute minimum and relative maximum delays.
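A heavily simplified sketch of the four-value bookkeeping described above follows. This is not the patented procedure itself; it only illustrates carrying four delay values (absolute minimum, relative minimum, relative maximum, absolute maximum) per stage and summing them along a path, with illustrative numbers.

```python
# Heavily hedged sketch of four-value delay propagation (simplified; not
# the procedure of U.S. Pat. No. 4,924,430 itself). Each stage carries
# four delays, in ns: (abs_min, rel_min, rel_max, abs_max). Values are
# illustrative.
stages = [
    (0.9, 1.0, 1.2, 1.3),
    (1.8, 2.0, 2.3, 2.5),
]

def propagate(stages):
    """Sum each of the four delay values along the path stages."""
    totals = [0.0, 0.0, 0.0, 0.0]
    for stage in stages:
        totals = [t + d for t, d in zip(totals, stage)]
    return totals  # four arrival times at the test point

abs_min, rel_min, rel_max, abs_max = propagate(stages)
# The comparisons described in the text pair the absolute extremes of one
# path against the opposite relative extremes of the other.
print(abs_min, rel_min, rel_max, abs_max)
```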
The ETE method, instead of computing individual slack times for clock/data signals, uses a statistical distribution of the arrival times for a large number of data and clock signals to calculate and propagate distributions of delays and arrival times, rather than single delay values. The distributions are used to identify nominal arrival times, the sums of the standard deviations (sigmas) of the arrival times, and the sums of the variances (sigmas squared) of the arrival times. The variance is the average of the squares of the deviations from the mean of the frequency distribution, and the standard deviation is the square root of the variance. Values for these statistical distribution components are propagated through the network. Assuming a normal distribution (a useful approximation), these values may be combined with correlation coefficients derived for individual delays to yield, at a test point, a distribution of test point slacks. If the slack distribution yields too high a probability of negative slack, the test fails.
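The ETE-style calculation can be sketched for a single test point. The means, variances, and correlation coefficient below are illustrative assumptions; the sketch combines two correlated normal arrival times into a slack distribution and reads off the probability of a violation.

```python
from statistics import NormalDist

# Hedged sketch of an ETE-style slack distribution at one test point.
# The means, variances, and correlation coefficient are illustrative
# assumptions, not values from the text.
data_mean, data_var = 4.8, 0.04    # data arrival: mean (ns), variance (ns^2)
clock_mean, clock_var = 5.5, 0.03  # clock arrival: mean (ns), variance (ns^2)
rho = 0.8                          # assumed correlation between the arrivals

# Slack = clock - data. For correlated normal variables, the variance of
# the difference subtracts twice the covariance term, so high correlation
# tightens the slack distribution.
slack_mean = clock_mean - data_mean
slack_var = clock_var + data_var - 2 * rho * (clock_var * data_var) ** 0.5
slack = NormalDist(slack_mean, slack_var ** 0.5)

p_violation = slack.cdf(0.0)  # probability the data arrives after the clock
print(f"P(slack < 0) ~ {p_violation:.2e}")
```

Note the design point this illustrates: because the correlation term is subtracted, ignoring correlation (rho = 0) would widen the slack distribution and overstate the probability of failure, which is the pessimism the correlated methods reduce.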
Each of the non-Monte Carlo statistical methods, however, remains overly pessimistic because it ignores common portions of the network paths over which the clock and data signals are propagated. Signal arrival times are usually compared at latches, and the data and clock signals often share common path segments to the latch. As integrated circuits get bigger and clock distribution systems get longer, these common portions may increase. Except for the Monte Carlo analysis, the statistical timing analysis methods discussed above are overly pessimistic when the signals share a common path, because these methods assume that the entirety of each path varies independently within the same allowed correlation, when in fact the delays of the segments the signals share correlate perfectly, since they are the very same delays.
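The common-path pessimism described above can be made concrete with a small worked example. The delay ranges are illustrative: on any one chip, the shared segment contributes the same delay to both the clock and data arrivals, so it cancels out of the slack, but a method that lets it vary independently for each path inflates the apparent spread.

```python
# Hedged sketch of common-path pessimism. The clock and data paths share a
# common segment to the latch; that segment's delay is identical on both
# paths on any given chip and so cancels out of the slack. Treating it as
# independently varying (as the non-Monte Carlo methods do) is pessimistic.
# All delay ranges are illustrative, in ns.
common_min, common_max = 1.0, 1.4        # shared segment delay range
data_only_min, data_only_max = 3.0, 3.4  # data-only segment delay range
clock_only_min, clock_only_max = 4.2, 4.6  # clock-only segment delay range

# Pessimistic: the common segment varies independently for each path, so
# the worst case pairs its maximum on the data path with its minimum on
# the clock path -- a combination no single chip can actually exhibit.
pessimistic_slack = (common_min + clock_only_min) - (common_max + data_only_max)

# Correct: the common delay is the same on both paths, so it cancels.
actual_worst_slack = clock_only_min - data_only_max

print(f"pessimistic slack: {pessimistic_slack:.1f} ns, "
      f"actual worst slack: {actual_worst_slack:.1f} ns")
```

The pessimistic figure understates the true worst-case slack by exactly the spread of the shared segment, which is why the error grows as clock distribution networks lengthen.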
Thus, it is an object of the present invention to provide a more accurate method of analyzing the timing synchronization variances in a logic network. It is a further object to provide such a method which eliminates the pessimism of non-Monte Carlo statistical methods of analysis, caused by the failure to recognize common path segments traversed by the analyzed signals, while at the same time preserving the computer storage and run time benefits of these methods.