This invention relates generally to semiconductor device verification, and more particularly to the automatic evaluation of semiconductor behavior within a framework defined by a test plan.
Hardware logic design is preferably tested prior to manufacture. Functional verification, such as logic simulation, is a process used to demonstrate design correctness. In other words, functional verification is performed on a hardware description language (HDL) model prior to manufacturing to ensure that a design's implementation satisfies the requirements defined by its specification. Determining design verification completeness, however, is consistently one of the most difficult and challenging aspects of the hardware design verification process.
There are many facets of hardware behavior that are of interest to the verification process, including correct result generation, proper sequence of events and timing relationships between signals, as well as other design properties such as "reachability" and arbitration "fairness."
Verification Test Plans. The verification test plan identifies the features of the design that are to be verified. Generally, features to be verified are interpreted and extracted from the design's specification or requirements document, or are described or enumerated by the engineer based on knowledge of the design implementation. Each feature to be verified (i.e., test item) is labeled and added to the verification test plan along with a short description of exactly what is to be verified. To ensure functional verification completeness, a unique test (or test case) must be generated for each test plan item and applied to the design model.
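For illustration only, the labeled test items described above can be modeled as a simple record keyed by label; the following Python sketch uses hypothetical field names (`label`, `description`, `covered`) that are not part of the disclosed system:

```python
from dataclasses import dataclass, field

@dataclass
class TestPlanItem:
    # Hypothetical fields: a unique label, a short description of what
    # is to be verified, and a flag set once coverage is observed.
    label: str
    description: str
    covered: bool = False

@dataclass
class TestPlanDatabase:
    # Test items indexed by their label for direct lookup.
    items: dict = field(default_factory=dict)

    def add_item(self, item: TestPlanItem) -> None:
        self.items[item.label] = item

    def mark_covered(self, label: str) -> None:
        if label in self.items:
            self.items[label].covered = True

    def is_complete(self) -> bool:
        # Verification is complete when every test item has been covered.
        return all(i.covered for i in self.items.values())
```

The unique label per test item is what later allows coverage results to be tied directly back to the test plan.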
Verification Levels. The verification process is generally partitioned into the following levels of granularity: block-level (sometimes referred to as unit-level verification), chip-level and system-level. Block-level verification offers the best simulation performance (i.e., takes less time to verify) while system-level verification offers the worst performance. In addition, block-level verification offers the most controllability and observability over all levels of verification. Hence, many features of the design are generally easier to verify at this lower level of the design. A drawback of block-level verification is that creating a test-bench, which is required to model the block-level environment (i.e., used to drive input stimulus into the block-level design and validate output responses), might become as complex as the actual design itself. This is the case if block-level interfaces are unstable (i.e., keep changing in an ad-hoc fashion during the design process). In addition, some features of the design, which require higher levels of integration, cannot be verified at the block-level.
Chip-level verification, while possessing reduced controllability and observability, offers ease in test-bench generation due to its stable interface as well as higher integration of block-level components. In addition, chip-level verification has a direct relationship to the physical design, facilitating chip-interface and manufacturing test-pattern verification. Some features of the design, which require still higher levels of integration, cannot be verified at the chip-level of design.
System-level verification provides the highest level of integration, thus eliminating the need to model environmental assumptions. Conceptually, all features of a design could be verified at the system-level. In practice, however, this is not practical due to reduced controllability and observability and decreased simulation performance (e.g., fewer simulation cycles per second).
Simulation Verification Strategies. To be successful, verifying large complex designs requires multiple simulation-based verification strategies, such as directed, random and transaction-based testing. Strategy selection is determined by the characteristics and goals of a specific test item, as defined in the verification test plan.
Directed test, which usually contains self-checking code, focuses the verification process on particular functional behavior of the design or increases verification coverage on a specific test item or area of concern. Although directed test generally provides high coverage for a specific feature to be verified (i.e., test item), developing a directed test is a labor-intensive process. It is difficult to determine what additional design functionality has been verified with directed testing, aside from the specific feature targeted by a particular directed test.
Random testing with self-checking code provides a mechanism for directing the verification process toward unforeseen behavior of the design, subtle interactions, or complex sequences of multiple simultaneous events without the labor-intensive effort of generating multiple directed tests. Determining exactly what functionality has been verified through random testing is generally a problem since there are no direct coverage metrics linked back to the verification test plan.
Transaction-based verification can be efficiently employed to prove communication-based correctness when integrating block or chip-level components. Many block-level design features, such as implementation details, can be difficult to verify using transaction-based verification. Hence, determining exactly what test plan functionality is covered, unrelated to the bus transactions, is a challenge.
Formal and Semi-Formal Verification Methodologies: So-called formal verification tools include state-space exploration, static property checker and model checker tools that validate user-specified properties using static and formal techniques without test vectors. Formal tools are heavily proof driven and mathematical in approach. Semi-formal verification tools, such as amplification tools, combine traditional simulation techniques with formal state-space exploration and model checking techniques. The mathematical proofs of the formal tool approach are employed with a limited or bounded search at certain times during the verification process.
Generally, specific features (test items) defined in the verification test plan are translated to formal description language properties and proved using formal techniques. During manual review, if the properties associated with a test item are proved valid using formal verification, then the test plan's test item is declared covered.
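The bookkeeping implied by this manual review step can be illustrated with a short Python sketch: given proof results for the properties associated with each test item, the corresponding items are marked covered. The names and dictionary layout below are assumptions for exposition, not the disclosed method:

```python
def apply_formal_results(proof_results, test_plan):
    """Mark test items covered when their associated formal-description-
    language properties were proved valid (illustrative sketch only).

    proof_results: dict mapping test-item label -> bool (property proved?)
    test_plan: dict mapping test-item label -> {"covered": bool, ...}
    """
    for label, proved in proof_results.items():
        if proved and label in test_plan:
            # A proved property means the associated test item is covered.
            test_plan[label]["covered"] = True
    return test_plan
```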
Completeness: Determining verification completeness, related to the design's test plan, is currently a manual and ad-hoc process. Generally, upon successful execution of the design's combined suite of tests (directed, random, etc.), the design team will declare that the functionality as defined in the test plan has been verified. Random simulation provides a mechanism for identifying design flaws, particularly related to unforeseen characteristics of the design or subtle interactions of complex sequences of events. Determining exactly what functionality related to the original test plan has been verified using random simulation has historically been a problem.
Existing verification methods do not directly link the verification process to the test plan in an automatic fashion. No automatic method is available by which the engineer can determine when all features of the design, as specified in the test plan, have been verified. A systematic and automatic method for evaluating test plan functional coverage is desirable and preferable to the current manual and ad-hoc evaluation process.
According to the present invention, a system and method are provided for automatic verification of a test plan for a device. The device is specified by a hardware description language (HDL) model or a formal description language model derived from the HDL model. A test plan tool and database are provided to capture test item definitions and identifications, and a monitor generator is provided to generate monitors for detecting functional coverage during verification. An evaluator is provided to compare verification events with test plan items, to determine test item completeness and to update the test plan database.
By capturing test item events during the verification process, and evaluating the relation of these events to the original test plan test item definition, a user can automatically determine exactly what features are being verified using any or all verification strategies (e.g., directed, random as well as transaction-based simulation). Unlike other coverage techniques, which are not linked directly to the verification test plan, the method of the invention immediately facilitates determining functional verification completeness as specified by the verification test plan.
Therefore, in accordance with the method of the present invention, a method for automatically verifying the test plan of a software model of a semiconductor device includes generating monitors for characteristics of the software model that each correspond to a verification event in accordance with a test plan item stored in a test plan database, processing the software model and monitors in a modeling environment and generating verification events, and comparing the verification events and test plan items and updating the test plan database.
Further, in accordance with the present invention, a system is provided for automatic verification of a test plan for a software model of a semiconductor device. The system comprises a test plan database for storing test plan items relating to the test plan, a monitor generator for generating monitors for characteristics of the software model, each monitor corresponding to a verification event according to one of the test plan items, a modeling environment for processing the software model and the monitors and generating verification events, and an evaluator for comparing the verification events and the test plan items and updating the test plan database.
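The monitor/evaluator flow described in the method and system above can be sketched roughly as follows. In this Python illustration, a "monitor" is a function that emits a test-item label (the verification event) when its predicate holds on a model state, and the evaluator updates the test plan record from the collected events; all names, and the representation of model state as a dictionary, are assumptions for exposition rather than the claimed implementation:

```python
def make_monitor(label, predicate):
    """Return a monitor that emits a verification event (the test-item
    label) whenever its predicate holds on the observed model state."""
    def monitor(state):
        return label if predicate(state) else None
    return monitor

def run_verification(states, monitors, test_plan):
    """Process a sequence of model states against the generated monitors,
    collect verification events, and update the test plan record
    (illustrative sketch only).

    states: iterable of model states (here, dicts of signal values)
    monitors: list of monitor functions produced by make_monitor
    test_plan: dict mapping test-item label -> {"covered": bool, ...}
    """
    events = []
    for state in states:
        for monitor in monitors:
            event = monitor(state)
            if event is not None:
                events.append(event)
    # Evaluator step: compare events with test plan items and update.
    for label in events:
        if label in test_plan:
            test_plan[label]["covered"] = True
    # Report the test plan items that remain unverified.
    return [label for label, item in test_plan.items() if not item["covered"]]
```

Because each monitor is tied to a specific test-item label, coverage observed under any strategy (directed, random or transaction-based) flows back to the same test plan record, which is the linkage the background sections identify as missing from prior practice.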