1. Field of the Invention
The present invention relates in general to verifying designs and in particular to verifying a logic function in a netlist. Still more particularly, the present invention relates to a system, method and computer program product for performing target enlargement in the presence of constraints.
2. Description of the Related Art
With the increasing penetration of microprocessor-based systems into every facet of human activity, demands have increased on the microprocessor development and production community to produce systems that are free from data corruption. Microprocessors have become involved in the performance of a vast array of critical functions, and the involvement of microprocessors in the important tasks of daily life has heightened the expectation of reliability of calculative results. Whether the impact of errors would be measured in human lives or in mere dollars and cents, consumers of microprocessors have lost tolerance for error-prone results. Consumers will not tolerate, by way of example, miscalculations on the floor of the stock exchange, in the medical devices that support human life, or in the computers that control their automobiles. All of these activities represent areas where the need for reliable microprocessor results has risen to a mission-critical concern.
Formal verification techniques, semiformal verification techniques and simulation provide powerful tools for discovering errors and verifying the correctness of logic designs. These techniques frequently expose probabilistically uncommon scenarios that may result in a functional design failure. Additionally, formal verification techniques provide the opportunity to prove that a design is correct (i.e., that no failing scenario exists).
One commonly-used approach to formal, semiformal, and simulation analysis for applications operating on representations of circuit structures is to represent the underlying logical problem structurally (as a circuit graph), and to perform explicit or symbolic evaluation of that circuit graph.
Constraints are often used in verification to prune the possible input stimulus in certain states of the design. For example, a constraint may state "if the design's buffer is full, then constrain the input stimulus to prevent new transfers into the design". Semantically, the verification tool will discard any states for which a constraint evaluates to a 0 (i.e., the verification tool may never produce a failing scenario showing a violation of some property of the design if that scenario does not adhere to all the constraints at all time-steps up to and including the failure). In the previous example, it would be illegal for the verification tool to produce a trace of length "i" showing a violation of some property if, at any time-step from 0 to i (inclusive), that trace illustrated the buffer being full while a new transfer was initiated into the design.
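The pruning semantics described above can be illustrated with a small sketch. The buffer model, signal names, and step function below are invented for illustration; a real verification tool operates on a netlist, but the principle of discarding any run in which a constraint evaluates to 0 is the same.

```python
# Illustrative sketch: discarding simulation runs that violate a constraint.
# The design model (a two-entry buffer) and signal names are hypothetical.

def constraint_holds(state, inputs):
    """Constraint: if the buffer is full, no new transfer may be started."""
    return not (state["buffer_full"] and inputs["start_transfer"])

def step(state, inputs):
    """Toy next-state function: each started transfer fills one buffer slot."""
    count = state["count"] + (1 if inputs["start_transfer"] else 0)
    return {"count": count, "buffer_full": count >= 2}

def explore(init_state, input_vectors):
    """Simulate a sequence of input vectors, discarding the run (returning
    None) if the constraint evaluates to 0 at any time-step."""
    state = init_state
    for inputs in input_vectors:
        if not constraint_holds(state, inputs):
            return None  # illegal trace: constraint violated at this step
        state = step(state, inputs)
    return state

init = {"count": 0, "buffer_full": False}
```

A run that attempts a third transfer once the buffer is full is discarded, while a run that stops transferring once full remains legal.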
Explicit simulation-based approaches to hardware verification are scalable to very large designs, though they suffer from a coverage problem that generally yields exponentially decreasing coverage with respect to design size. Formal verification techniques overcome the coverage problem of simulation by yielding exhaustive coverage, though they suffer from computational complexity that limits their application to smaller designs.
Target enlargement is a technique that has been proposed to partially leverage formal algorithms to enhance the coverage attainable with simulation. The idea of target enlargement is to use formal algorithms to enumerate the set of design states which may falsify a given property within "i" time-steps, then to use simulation to try to hit this enlarged state set instead of directly attempting to falsify the property. One primary benefit of target enlargement is that simulation need only come within "i" time-steps of falsifying the property to expose a failure, whereas without target enlargement simulation must directly falsify the property, which may be exponentially less probable. For example, it may be that only one of the 2^N possible input vectors (where ^ denotes exponentiation) for a design with N inputs moves the design closer to the failure at each step; in that case, extending a simulation run by "i" steps to hit the original target may require one specific sequence out of (2^N)^i=2^(i*N) possible sequences of input vectors from the state that hits the enlarged target. This example illustrates how target enlargement effectively leverages formal algorithms to exponentially increase the coverage of simulation approaches.
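A minimal sketch of the enlargement step follows, using an invented four-state explicit transition relation; real tools perform this preimage computation symbolically on a netlist, but the fixpoint-style backward enumeration is the same idea.

```python
# Illustrative sketch of target enlargement on a tiny explicit-state model.
# The FSM below is hypothetical: transitions[s] is the set of states
# reachable from s in one step under some input.
transitions = {
    0: {0, 1},
    1: {2},
    2: {3},
    3: {3},
}
target = {3}  # states that falsify the property

def enlarge(target, transitions, i):
    """Enumerate all states that can reach the target within i time-steps
    by taking i backward (preimage) steps from the target set."""
    enlarged = set(target)
    frontier = set(target)
    for _ in range(i):
        preimage = {s for s, succs in transitions.items()
                    if succs & frontier}
        frontier = preimage - enlarged
        enlarged |= preimage
    return enlarged
```

With i=2, the enlarged target becomes {1, 2, 3}: simulation now only needs to reach state 1, from which formal analysis has already established that the original target at state 3 is reachable within two steps.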
Verification constraints are increasingly pervasive constructs in the modeling of verification environments. A verification constraint is a specially-labeled gate of a design, where the semantics of the constraint are such that the verification toolset cannot produce a "j" time-step trace showing a violation of a property which evaluates any of the constraints to a 0 within those "j" time-steps. Constraints thus alter the verification task from computing a "j"-step trace showing a violation of a property (or proving that no such trace exists for any "j") to computing a "j"-step trace that shows a violation of a property while never evaluating any constraint gate to 0 within that time-frame (or proving that no such trace exists for any "j").
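The altered verification obligation can be stated as a predicate over traces. The sketch below is illustrative: a trace is modeled as a list of per-time-step gate valuations, and the gate names are hypothetical.

```python
# Illustrative sketch: the constrained verification obligation as a
# predicate over a trace. Each element of `trace` maps gate names to
# their 0/1 valuation at that time-step; gate names are hypothetical.

def is_valid_failure(trace, property_gate, constraint_gates):
    """A j-step trace demonstrates a failure only if every constraint
    gate evaluates to 1 at every time-step 0..j AND the property gate
    is violated (evaluates to 0) at the final step."""
    if not trace:
        return False
    for valuation in trace:
        if any(valuation[c] == 0 for c in constraint_gates):
            return False  # a constraint hit 0: the trace must be discarded
    return trace[-1][property_gate] == 0
```

A trace that violates the property while all constraints hold is a legal counterexample; an otherwise identical trace in which any constraint evaluates to 0 at any step is not.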
Under the prior art, no solution exists for performing target enlargement in the presence of verification constraints; traditional target enlargement approaches are therefore not applicable to designs with verification constraints.