An ongoing problem in the design of large systems is verifying that the system will indeed behave in the manner intended by its designers. One approach has been simply to try out the system, either by building and testing the system itself or by building and testing a model of the system. In recent years, those skilled in the art have gravitated toward the approach of building and testing a model of the system through software. That is, the approach has been to form a computer simulation of the system, i.e. a computer program which is a model of the system, and to execute the computer program to test the functionality or properties of the system.
In testing a design (i.e. hardware and/or software) in the course of development, those skilled in the art have classically created a model of the hardware or software, and run the model through a number of scenarios, wherein each scenario focuses on a functional aspect of the hardware or software design. Since a single scenario is rarely sufficient to test a given function of a design, a number of related scenarios are tested for each aspect or function of the design to determine whether the function is correctly implemented in the hardware or software. Together, the group of related scenarios is called a test suite. Thus, each function of a design is tested through a test suite which consists of several scenarios.
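The relationship described above, in which each function of a design is checked by a test suite consisting of several related scenarios, can be sketched as follows. This is an illustrative sketch only; the function names and scenario structure are hypothetical and are not drawn from any particular verification tool.

```python
# Hypothetical sketch: a test suite is a group of related scenarios,
# each scenario exercising one variation of a single function under test.

def make_adder_scenarios():
    """Scenarios for one function of the design (here, integer addition)."""
    return [
        ("adds positives", lambda add: add(2, 3) == 5),
        ("adds negatives", lambda add: add(-2, -3) == -5),
        ("adds zero",      lambda add: add(7, 0) == 7),
    ]

def run_suite(scenarios, implementation):
    """Run every scenario against the implementation; report failing names."""
    return [name for name, check in scenarios if not check(implementation)]

# An implementation passes its suite when the failure list is empty.
failures = run_suite(make_adder_scenarios(), lambda a, b: a + b)
```

A suite with many such scenarios is what makes running it time consuming: each variation of the "basic" scenario must be set up, executed, and evaluated.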
Running a single test suite can be an involved process, requiring set-up, running the scenarios, and evaluating the results. Although a test suite having a single scenario can be performed quickly, a test suite having many scenarios can be very time consuming and thus costly to run. Since many hardware and software designs require a test suite that contains many scenarios designed to check a very large number of possible variations of a "basic" scenario, present-day verification tools can be quite time-consuming, and thus costly for use in checking the behavior of a design.
To illustrate, many commercial hardware and software designs have roughly N independent functions, wherein 10 ≤ N ≤ 100. For each of the N functions, a test suite is designed to check a given implementation (i.e. a system model). Assuming that the tests are run consecutively, at some point during testing the given implementation may fail. That is, in testing a given implementation of a system design, the i-th test may find a system error or design problem. At that point, the source of the problem is located and the implementation is changed so as to fix the problem. Once the implementation is changed, however, the issue becomes whether the fix or change of the implementation necessitates re-running all previously passed test suites. That is, does an adjustment made to fix one problem uncovered by the i-th test adversely affect the implementation with respect to another function or property of the design? If the fix or adjustment causes a break in the functionality covered by any of the previous (i-1) tests, then each previously tested property must be re-checked.
One common method of determining whether an adjustment to the implementation resulting from testing one scenario has affected the function of a previously tested aspect of the system design is called regression testing. In regression testing, when the i-th test causes a change to the given implementation being tested, the previous (i-1) tests are re-run to ensure that each previously-checked property still behaves as expected. Although fixing one problem does not always break another property or function of the design, almost no system implementation behaves as expected the first time around. As a result, it can be argued that one can expect on the order of N² regression tests for an N-function or N-property design. Thus, for a design having N=50 functions, regression testing can be very time consuming and costly.
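The order-N² cost argument above can be made concrete with a small sketch, under the pessimistic assumption (stated in the text: almost no implementation behaves correctly the first time) that every one of the N suites fails exactly once and each fix forces re-running all earlier suites. The function name is illustrative.

```python
# Illustrative count of suite executions under regression testing,
# assuming each of the n suites fails once, and each fix triggers a
# re-run of all previously passed suites.

def regression_runs(n_suites):
    """Total suite executions when every fix forces full re-testing."""
    runs = 0
    for i in range(1, n_suites + 1):
        runs += 1   # suite i is run, fails once, and the design is fixed
        runs += i   # suite i plus the (i-1) earlier suites are re-run
    return runs     # grows roughly as n_suites**2 / 2, i.e. O(N^2)
```

For n_suites = 50 this yields 1325 executions, consistent with the text's order-of-N² estimate.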
Another method for testing a hardware and/or software design is formal verification. In formal verification, the designer provides a logical definition of the intended behavior of the design or system, and a logical definition of the implementation of the design, to a formal verification system. The formal verification system then determines whether the logical definition of the implementation implies the logical definition of the system's intended behavior. That is, the formal verification system determines whether the implementation can perform the functions or tasks it is intended to perform, as defined by the system specification.
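For a finite design, the implication check described above can be sketched as an exhaustive comparison of the implementation's behavior against the specification over every input, returning any counterexamples found. This toy sketch is illustrative only; real formal verification systems operate symbolically on logical definitions rather than by enumeration, and all names here are hypothetical.

```python
# Toy sketch of the implication check: over a small finite domain, confirm
# that every behavior of the implementation is allowed by the specification.

def spec(x, y):
    """Intended behavior: the output equals the maximum of the inputs."""
    return max(x, y)

def implementation(x, y):
    """Candidate logical design being verified."""
    return x if x >= y else y

def verify(domain):
    """Exhaustively check implementation against spec; list counterexamples."""
    return [(x, y) for x in domain for y in domain
            if implementation(x, y) != spec(x, y)]

# An empty counterexample list means the implementation satisfies the spec.
counterexamples = verify(range(-5, 6))
```

When the list is non-empty, the verification system has found a task the implementation cannot perform, and the implementation must be adjusted.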
When the formal verification system finds a function or task which the implementation cannot perform, the logical definition of the implementation must be adjusted. When such an adjustment is made, the verification methodology faces the same problem associated with testing, as described above. That is, the verification system must re-verify all previously verified tasks to ensure that the adjustment did not affect the behavior of the implementation for those tasks or functions.
For example, in regression verification (the analog of regression testing, when verification is used in place of testing), re-verifying the behavior of a property or function of the system after a change is made can be extremely, or even prohibitively, costly. This is because a single verification run may take many hours or even days. To illustrate, for N=50, even if each verification run takes only one hour, regression verification could take in excess of 100 CPU-days. Such a demand on computer resources may lie outside the limit of feasibility for many projects. Accordingly, there is a need to reduce the time and cost associated with such verification systems when testing an implementation of a design.
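The 100-CPU-day figure follows directly from the stated assumptions: N = 50 properties, on the order of N² regression-verification runs, and one hour per run. A back-of-the-envelope check:

```python
# Arithmetic behind the text's estimate (assumed values from the passage).

n = 50                         # number of properties or functions
runs = n ** 2                  # order-of-N^2 regression-verification runs
hours_per_run = 1              # optimistic one hour per verification run

cpu_days = runs * hours_per_run / 24
print(round(cpu_days))         # → 104, i.e. in excess of 100 CPU-days
```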