In the area of electronic design automation, tests may be conducted at the physical design stage of an integrated circuit to diagnose timing behavior and determine timing tolerances. Exemplary tests for evaluating the timing of a circuit include setup-time violation and hold-time violation tests. As is known, a setup time is the amount of time that data at a synchronous data input (e.g., data at a flip-flop) must be stable before the active edge of a clock is received, and a hold time is the amount of time that data at a synchronous input of a flip-flop must be stable after receipt of the active clock edge. Further timing tests detect a violation of a timing value or circuit parameter, e.g., a cap/slew violation, in a semiconductor circuit or circuit timing path; such violations may be caused by nanometer-scale process manufacturing defects or design timing model variations that can cause silicon to fail these tests during the physical design stage. Setup- and hold-time violations are typically detected in data shift operations or in system logic operations.
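The setup- and hold-time conditions described above can be sketched as simple slack computations. This is an illustrative sketch only; the function names and the timing values used below are hypothetical and are not drawn from the source.

```python
# Illustrative sketch of setup- and hold-slack checks at a flip-flop input.
# All names and numeric values are hypothetical examples.

def setup_slack(clock_period, data_arrival, setup_time):
    """Data must be stable `setup_time` before the capturing clock edge.
    Positive slack means the setup condition is met."""
    return clock_period - data_arrival - setup_time

def hold_slack(data_arrival, hold_time):
    """Data must remain stable `hold_time` after the launching clock edge.
    Positive slack means the hold condition is met."""
    return data_arrival - hold_time

# Negative slack in either check would indicate a timing violation.
s = setup_slack(clock_period=1.0, data_arrival=0.7, setup_time=0.2)  # positive: setup met
h = hold_slack(data_arrival=0.15, hold_time=0.05)                    # positive: hold met
```

In this simplified model, a setup violation appears when the data arrives too late relative to the clock period, while a hold violation appears when the data arrives (and changes) too early after the clock edge.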
Technology advances and the integration of a broad spectrum of intellectual property continue to push the limits of power in all semiconductor applications. Currently, there are many techniques to fix setup test violations and cap/slew violations. Such techniques may include, for example, threshold voltage (VT) swaps, gate resizing, gate moving, layer promotion/demotion, buffer insertion, logic restructuring, etc.
Currently, however, there is only a single technique to fix hold-time test violations. This technique involves padding the data path with additional buffers to increase the data path delay. The data path is the path along which data traverses and may be a purely combinational path having any basic combinational gates or groups of gates.
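The buffer-padding technique described above can be sketched as inserting buffers until the data path delay satisfies the hold requirement. This is a hypothetical illustration; the fixed per-buffer delay and the numeric values are assumptions, and real flows select buffer cells and placement from a library.

```python
# Hypothetical sketch of the single known hold-fix technique: pad the
# data path with buffers until the hold requirement is satisfied.
# The uniform per-buffer delay and all values are illustrative assumptions.

def pad_for_hold(data_path_delay, hold_time, buffer_delay):
    """Insert buffers (each adding `buffer_delay`) until the data path
    delay meets or exceeds the hold time; return the count and new delay."""
    buffers = 0
    while data_path_delay < hold_time:
        data_path_delay += buffer_delay
        buffers += 1
    return buffers, data_path_delay

# A path arriving too early (0.08) against a 0.20 hold requirement:
buffers, new_delay = pad_for_hold(data_path_delay=0.08,
                                  hold_time=0.20,
                                  buffer_delay=0.05)
```

The cost noted in the following paragraph is visible even in this sketch: every inserted buffer consumes power and area, and padding against the single worst-case requirement may over-pad other scenarios.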
However, this solution leverages only the projected slack, or the worst slack across the variability space. Moreover, with increasing variability, this strategy likely will not hold up. At a minimum, this solution may be inefficient in terms of power and area.
As technology nodes become smaller, on the order of nanometers, manufacturing variability increases and more timing violations can be expected, thereby increasing the difficulty of achieving timing optimization. As one source of variability, operating-voltage differences may have a dramatic effect on circuit timing. For example, a circuit may operate nominally at a first voltage (e.g., 0.9 V), but in reality may operate at that nominal voltage plus or minus a tolerance (e.g., +/−0.2 V). Thus, timing delay variability may be due to a variety of different variables, and circuit designers must design circuits that account for this variability.
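The effect of the voltage tolerance in the example above can be sketched with a first-order delay model. This is an assumption for illustration only: real delay-versus-voltage behavior is nonlinear and library-dependent, and the sensitivity coefficient below is hypothetical.

```python
# Illustrative sketch: gate delay variation across an operating-voltage
# tolerance. The linear sensitivity model and all numbers are assumptions;
# actual delay-voltage curves come from characterized timing libraries.

def gate_delay(voltage, nominal_delay=0.10, nominal_v=0.9, sensitivity=0.08):
    """First-order model: delay increases as supply voltage drops
    below nominal, and decreases as it rises above nominal."""
    return nominal_delay + sensitivity * (nominal_v - voltage)

# Nominal 0.9 V with a +/-0.2 V tolerance, as in the example above:
delays = {v: gate_delay(v) for v in (0.7, 0.9, 1.1)}
```

Even this crude model shows why a single-corner analysis is insufficient: the same gate is slowest at the low-voltage corner and fastest at the high-voltage corner, so setup and hold checks stress opposite ends of the tolerance range.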
As circuits are manufactured, considerable process variability and manufacturing variation may be observed. Further, because chips may not be printed exactly the same way each time, timing violations such as delay and slew violations vary from chip to chip.
In future technologies, there will be a further need to simultaneously optimize such circuits for high/low-voltage and high/low-frequency circuit models.
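Simultaneous optimization across such models can be sketched as requiring non-negative slack at every voltage/frequency corner rather than at a single projected corner. The corner names and slack values below are hypothetical examples.

```python
# Hypothetical sketch of simultaneous multi-corner checking: a path
# passes only if its slack is non-negative at every corner.
# Corner labels and slack values (in ns) are illustrative assumptions.

corners = {
    ("low_V", "high_F"): -0.02,   # failing corner in this example
    ("low_V", "low_F"): 0.05,
    ("high_V", "high_F"): 0.01,
    ("high_V", "low_F"): 0.08,
}

worst_corner = min(corners, key=corners.get)
worst_slack = corners[worst_corner]
passes_all = worst_slack >= 0.0
```

A fix driven only by the worst corner (as in the buffer-padding approach above) may degrade the other corners, which is why simultaneous multi-corner optimization becomes necessary.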
Issued U.S. Pat. No. 8,732,642 teaches a method of using statistical timing analysis tools to optimize an integrated circuit design by analyzing a local failing statistical quantity and identifying optimization transforms that can resolve the local fail. This is done by using pre-characterized “goodness” metrics (e.g., slack) so that, even without incremental timing, an estimate is obtained of how much improvement may be achieved. This technique requires a pre-characterized metric and has no ability to analyze the entire path leading to the failing quantity.