Today's large and complex digital integrated circuit (“IC”) designs typically include millions of logic gates. As part of the design cycle, these designs must be checked to ensure that they function properly and meet their performance requirements. The modeling, verification, and optimization process for checking these designs involves extensive timing analysis. Electronic design automation (“EDA”) tools are used to automate and speed up this process.
One aspect affecting the timing analysis of IC designs is the variation in circuit components induced by the normal variations in the fabrication process. The variation in process parameters needs to be taken into account in order to provide accurate timing models and, subsequently, high-yielding manufactured components. EDA tools include techniques for modeling the typical process parameter variations of semiconductor fabrication processes. Conventionally, these techniques are based on “worst-case” conditions or statistical approximations.
The effect of process variation on timing performance is increasing as feature sizes shrink. As this effect increases, the current methodology for modeling, verification, and optimization of designs is encountering problems. Current EDA tools use static timing analysis techniques to verify proper operation at the desired speed and to find design elements that can be changed to fix problems or improve the performance of the circuit.
However, the current timing methodology fails to model process variation accurately because it is based on worst-case analysis, which has several problems. Worst-case conditions can be overly pessimistic: fabricated chips are often much faster than predicted. Worst-case conditions assume that everything that can go wrong does, which is unlikely in reality. In addition, given the many sources of variation (wire widths and thicknesses, transistor lengths, oxide thickness, temperature, and voltage), it is hard to construct an accurate model of worst-case operating conditions. Moreover, worst-case conditions can be path dependent: depending on how they are routed on the various metal layers, different paths can have different sensitivities to wire width variation. For example, one path may have a tighter timing constraint when the metal on layer 2 is wide and the metal on layer 3 is narrow, while another path's timing is tighter when the situation is reversed.
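The path dependence described above can be illustrated with a minimal sketch. The two paths, their linear sensitivity coefficients, and the normalized width deviations below are all hypothetical numbers chosen for illustration; they are not from the original text.

```python
# Hypothetical linear models of two path delays (in ps) as a function of
# normalized wire-width deviations on metal layers 2 and 3.
def path_a(w2, w3):
    # Path A slows down when layer-2 wires are wide and layer-3 wires narrow.
    return 500.0 + 20.0 * w2 - 15.0 * w3

def path_b(w2, w3):
    # Path B has the opposite sensitivity.
    return 500.0 - 20.0 * w2 + 15.0 * w3

# Evaluate both paths at the four extreme width corners.
corners = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
for w2, w3 in corners:
    print(f"w2={w2:+d} w3={w3:+d}: A={path_a(w2, w3):.0f} ps, "
          f"B={path_b(w2, w3):.0f} ps")

# The corner that maximizes path A's delay (wide layer 2, narrow layer 3)
# minimizes path B's delay, and vice versa, so no single worst-case corner
# covers both paths.
worst_for_a = max(corners, key=lambda c: path_a(*c))
worst_for_b = max(corners, key=lambda c: path_b(*c))
print("worst corner for A:", worst_for_a, "worst corner for B:", worst_for_b)
```

Because the worst corners for the two paths are opposite, a single set of worst-case conditions cannot be pessimistic for both paths at once, which is the difficulty noted above.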
To address these problems, statistical timing analysis techniques have been proposed. Many of these techniques treat delays as independent random variables with known probability density functions (“pdfs”). The delay pdfs are used to obtain pdfs, or the related cumulative distribution functions (“cdfs”), for signal arrival times and slacks. These pdfs and/or cdfs are then used to calculate the probability that a particular slack value will be nonnegative.
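A minimal sketch of this style of calculation follows, assuming Gaussian delay pdfs and illustrative numbers (the gate delays and timing constraint below are hypothetical, not from the original text). Under independence, means and variances of the stage delays add along a path, and the probability of nonnegative slack is a normal cdf evaluated at the required arrival time.

```python
import math

def normal_cdf(x, mu, sigma):
    """P(X <= x) for X ~ N(mu, sigma^2), via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Hypothetical gate delays along one path, each modeled as an independent
# normal random variable given as (mean_ps, sigma_ps).
gate_delays = [(120.0, 10.0), (95.0, 8.0), (140.0, 12.0)]

# Under independence, the path arrival time is also normal:
# means add, and variances add.
mean_arrival = sum(m for m, s in gate_delays)
sigma_arrival = math.sqrt(sum(s * s for m, s in gate_delays))

# Slack = required_time - arrival_time, so slack is nonnegative exactly
# when the arrival time does not exceed the required time.
required_time = 400.0  # hypothetical clock constraint in ps
p_slack_nonneg = normal_cdf(required_time, mean_arrival, sigma_arrival)
print(f"P(slack >= 0) = {p_slack_nonneg:.4f}")
```

The same cdf evaluation, repeated per path or per endpoint, yields the timing-yield estimates these techniques produce.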
However, existing methods for statistical timing analysis assume that delays are independent, even though process variations tend to be correlated across a die. Whereas worst-case analysis errs toward pessimism, this independence assumption can lead to significant optimism, which can understate the yield impact of process variation. Further, these statistical timing techniques do not expose the relationship between yield and its controlling factors, which makes yield improvement analysis more complicated.
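The optimism of the independence assumption can be seen in a small sketch with assumed numbers (ten identical Gaussian stage delays and a hypothetical timing constraint, none of which appear in the original text). When stage delays are independent, the standard deviation of the path delay grows like the square root of the stage count; when the stages share a fully correlated process shift, it grows linearly, and the predicted probability of meeting timing drops.

```python
import math

def normal_cdf(x, mu, sigma):
    """P(X <= x) for X ~ N(mu, sigma^2), via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Ten identical hypothetical stage delays, each N(100 ps, (10 ps)^2),
# summed along one path against a 1080 ps constraint.
n, mu, sigma, constraint = 10, 100.0, 10.0, 1080.0
mean_path = n * mu

# Independent stages: variances add, so sigma grows like sqrt(n).
sigma_indep = sigma * math.sqrt(n)
# Perfectly correlated stages (one shared process shift): sigmas add,
# so sigma grows like n.
sigma_corr = sigma * n

p_indep = normal_cdf(constraint, mean_path, sigma_indep)
p_corr = normal_cdf(constraint, mean_path, sigma_corr)
print(f"P(meet timing), independence assumed: {p_indep:.4f}")
print(f"P(meet timing), fully correlated:     {p_corr:.4f}")
```

If the real delays are correlated but the analysis assumes independence, it reports the larger probability, overstating the timing yield of the design.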
Thus, what is needed is a method for modeling process parameters in circuit timing analysis that (1) provides a more realistic process variation estimation, (2) helps identify process parameters that should be controlled more closely to increase yield of a particular design, and (3) helps identify changes that can be made to a design to increase fabrication yield.