The design and development of complex systems almost always starts with the definition of system requirements. A requirement is a documented characteristic that a system, product, or service must possess, or a documented function that a system, product, or service must perform. A requirement identifies an attribute, capability, characteristic, or quality that was determined to be necessary for a system to have value and utility for a user. In the classical engineering approach, sets of requirements are used as inputs into the design stages of product development.
Complex systems such as aircraft, automobiles, medical devices, or mobile phones may have thousands of requirements that describe various aspects of desired system function and behavior.
Requirements are often written as “shall” statements that describe the intended behavior or function of the system, including numerical limits or target values, context of operation, and use conditions. For instance, a performance requirement for a sports car may be: “The vehicle shall accelerate from a standstill to 100 km/h in 4 seconds or less on dry, hard-surface roads.”
A performance requirement for a truck may be: “The cold engine shall start after no more than 15 seconds of cranking at ambient temperatures between −30 degrees Celsius and −15 degrees Celsius and at sea level pressure.”
As a final example, a performance requirement for a rocket may be: “The rocket shall be capable of carrying a payload of 200 kg to an orbit between 400 and 600 km and between 40% and 50% inclination.”
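The structure shared by these examples — a measurable quantity, a numerical limit, and the conditions under which the limit applies — can be captured in a simple data structure. The sketch below is illustrative only; the `Requirement` class and its field names are assumptions, not part of any standard requirements schema:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """A 'shall' statement with a measurable limit and its use conditions.

    Illustrative sketch only; the fields below are assumptions, not a
    standard requirements format.
    """
    req_id: str
    text: str            # the full "shall" statement
    quantity: str        # the measured quantity
    limit: float         # numerical limit or target value
    comparison: str      # "<=", ">=", or "=="
    units: str
    conditions: dict = field(default_factory=dict)  # context of operation

    def is_satisfied(self, measured: float) -> bool:
        """Check a measured value against the numerical limit."""
        if self.comparison == "<=":
            return measured <= self.limit
        if self.comparison == ">=":
            return measured >= self.limit
        return measured == self.limit

# The sports-car acceleration requirement from the text:
accel_req = Requirement(
    req_id="PERF-001",
    text="The vehicle shall accelerate from a standstill to 100 km/h "
         "in 4 seconds or less on dry hard surface roads.",
    quantity="time from 0 to 100 km/h",
    limit=4.0,
    comparison="<=",
    units="s",
    conditions={"road": "dry hard surface"},
)

print(accel_req.is_satisfied(3.8))  # a measured 3.8 s meets the 4 s limit
```

Representing requirements this way makes the pass/fail criterion machine-checkable, which is what later verification steps rely on.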
Another important aspect of system engineering and engineering design is verification. The verification process ascertains that the designed system meets a set of initial design requirements, specifications, and regulations. Thus, requirements are an important input into the verification process. In fact, all requirements should be verifiable. The most common verification method is by test. Other verification methods include analysis, demonstration, simulation, or manual inspection of the design.
In traditional system engineering, verification by test is typically performed by building prototypes and developing test programs to verify specific requirements using those prototypes. In the case of the sports car mentioned above, the gold standard for testing the acceleration requirement would be to build a prototype, take it to a representative test track, find a skilled driver to drive the car around the track, and observe whether the car meets the acceleration requirement. To save time and cost, it is customary to bundle multiple verification activities into a single test program. In the example of the sports car, multiple requirements such as acceleration, top speed, and braking distance can be verified using the same test track setup.
The cold start time requirement for the truck may be tested by building a prototype and taking it to a cold climate (or a special test chamber) where the environmental conditions specified in the requirement may be replicated.
Finally, the rocket could be tested by building a prototype and launching a representative payload to space.
There are several undesirable aspects of using prototypes for verification by test. Prototypes are costly to design and build. If the test fails, a new prototype needs to be designed, built, and tested again. Also, if a requirement is modified, the tests have to be repeated. For instance, each rocket launch may cost tens of millions of dollars, resulting in a very expensive test program. Further, depending on the test, it may or may not be possible to repeat the test under different circumstances in order to understand the limits of performance. For instance, the prototype rocket mentioned in the example above would be spent during the test, making it cost-prohibitive to run multiple tests with different size payloads or various orbits.
Testing a prototype is often not an optimal verification method because the tests are difficult and expensive to set up and execute. For instance, the truck cold start time requirement necessitates extremely cold temperatures. While an environmental chamber is a convenient setup for such a test, it may be impossible to find an environmental chamber large enough to accommodate the prototype truck. The only option may be to haul the truck and all associated test hardware to a cold climate for winter testing, adding further cost and time delays to the test program. Also, tests might have health or safety consequences. Weapons or rocket testing is an obvious example (for instance, the Nedelin disaster in the Soviet Union in 1960 resulted in many deaths). Other examples include testing of biomedical devices (e.g., pacemakers or insulin pumps) on animals or humans, or crash testing of aircraft.
Use of computer models to analyze or predict system behavior or performance is well known in the art. In fact, the entire field of Computer-Aided Engineering (CAE) is dedicated to analyzing system behavior using mathematical models of physical systems. Several modeling languages, tools, and environments have been developed for this purpose, including Matlab™ and Simulink™ (from MathWorks), Modelica™ (an open language sponsored by the Modelica Association), ANSYS™ (from ANSYS, Inc.), ADAMS™ (from MSC Software), Simulia™ (from Dassault Systèmes), and others.
An alternative to using hardware prototypes for verification by test is to run simulations using virtual prototypes. A virtual prototype is a computer model of a system design that emulates the function, behavior, and structure of a physical instantiation of that design. Using a virtual prototype, one could verify several requirements that would otherwise necessitate expensive hardware prototypes and complex test programs.
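As a minimal illustration of the idea, the sketch below treats the sports car as a point mass with quadratic drag and integrates its speed forward in time until it reaches 100 km/h; the elapsed time is then compared against the 4-second acceleration requirement. The model and all vehicle parameters are illustrative assumptions, not data from any real vehicle:

```python
# Minimal sketch of verification by simulation: a point-mass "virtual
# prototype" of the sports car is integrated forward in time until it
# reaches 100 km/h, and the elapsed time is compared against the 4 s
# requirement. All parameters below are illustrative assumptions.

def time_to_speed(mass_kg, drive_force_n, drag_coeff, target_mps, dt=0.001):
    """Integrate v' = (F - c*v^2)/m with forward Euler; return elapsed time."""
    v, t = 0.0, 0.0
    while v < target_mps:
        accel = (drive_force_n - drag_coeff * v * v) / mass_kg
        v += accel * dt
        t += dt
        if t > 60.0:          # give up: the model cannot reach the target
            return None
    return t

target = 100.0 / 3.6          # 100 km/h expressed in m/s
t = time_to_speed(mass_kg=1500.0, drive_force_n=12000.0, drag_coeff=0.8,
                  target_mps=target)
print(f"0-100 km/h in {t:.2f} s -> requirement "
      f"{'met' if t <= 4.0 else 'not met'}")
```

A real virtual prototype would of course use far richer models (drivetrain, tires, road surface), but the verification logic — simulate, measure, compare against the limit — is the same.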
In the academic literature, there are a few examples of using virtual prototypes instead of actual prototypes to verify performance requirements. In the automotive industry, virtual crash tests are often used to verify certain passenger safety requirements. Indeed, there are several advantages to using virtual prototypes. Experiments conducted using virtual prototypes are inherently safe. Virtual prototypes can emulate the structure, behavior, and function of a hardware prototype, obviating the need to build test hardware or test rigs. Virtual prototypes can be used for destructive testing at no additional cost. Unlike hardware prototypes that are destroyed during testing, virtual prototypes may be created and tested over and over at will. Virtual prototypes may be used to study the limits of performance. For instance, it may be valuable to test the acceleration performance of the sports car under various road, environmental, load, and driver conditions. Using a virtual prototype, many tests can be run under varying conditions, providing a more robust prediction of the eventual performance of the system in the real world. Indeed, there are recent developments to estimate the probability of correctness of a given design by simulating the same design many times and varying certain model parameters and input variables.
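The "probability of correctness" idea mentioned above can be sketched as a simple Monte Carlo experiment: run the same virtual test many times while varying model parameters, and report the fraction of runs in which the requirement is met. The point-mass model and all parameter distributions below are illustrative assumptions:

```python
import random

# Run the same virtual acceleration test many times under varying load,
# engine, and drag conditions, and estimate the probability that the 4 s
# requirement is met. The model and distributions are illustrative only.

def time_to_speed(mass_kg, drive_force_n, drag_coeff, target_mps, dt=0.001):
    """Point-mass model with quadratic drag; returns time to target speed."""
    v, t = 0.0, 0.0
    while v < target_mps:
        v += (drive_force_n - drag_coeff * v * v) / mass_kg * dt
        t += dt
        if t > 60.0:
            return None
    return t

random.seed(0)                # reproducible sampling
target = 100.0 / 3.6          # 100 km/h in m/s
runs, met = 200, 0
for _ in range(runs):
    t = time_to_speed(
        mass_kg=random.gauss(1500.0, 50.0),          # load variation
        drive_force_n=random.gauss(12000.0, 600.0),  # engine variation
        drag_coeff=random.uniform(0.7, 0.9),         # environment variation
        target_mps=target,
    )
    if t is not None and t <= 4.0:
        met += 1

print(f"Requirement met in {met}/{runs} runs "
      f"(estimated probability {met / runs:.2f})")
```

Each run is essentially a free, repeatable destructive-or-not test, which is exactly the advantage of virtual prototypes described above.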
However, several limitations of current methods dramatically restrict the use of virtual prototypes to verify requirements. Existing approaches require manual configuration of tests for each requirement, including selection of the most appropriate simulation method, composition of models, setting of execution conditions, determination of a proper sequence of simulations, and definition of success criteria for the test. There is currently no method for identifying the most appropriate mathematical model to use in order to verify a given requirement. There is no method for identifying and modeling the most appropriate context (environment) for a given test. There is no method for selecting the most appropriate external stimuli (such as user inputs or external events) for a given test, if applicable. There is no method for identifying the most appropriate probes or measurement points into a simulation to verify a given requirement. And there is no method for creating and optimizing tests to verify multiple requirements concurrently. In the example of the sports car, the multiple requirements of acceleration, top speed, and braking distance can be verified using the same test track setup; currently, however, a human must either manually design three tests (one for each requirement) or manually design a single test that verifies all three requirements.
FIG. 1 is a flow diagram illustrating a prior art method 10 for testing a given requirement. The user chooses a requirement to be tested at 12. The user also chooses a system model at 14. Typically system models already exist and the user intelligently finds the correct one for testing the given requirement. The user then manually creates a test bench (16). This involves writing instructions in a predetermined language, or essentially, writing a small software program or script for execution by a processor. At 18, the user runs a simulation by executing the test bench program. At 20, the user manually interprets the results, which includes determining whether the test bench was appropriate to test the requirement as well as whether the requirement was met. If the test bench did not succeed in testing the requirement, as shown at 22, the user must go back to 16 and create a new test bench that hopefully avoids the problems that caused the previous test bench to fail.
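The hand-written test bench of step 16 might look like the following small program. The first-order system model, the full-throttle stimulus, and the pass/fail check are all coded by the user; every name and parameter here is an illustrative assumption rather than the output of any particular tool:

```python
# Sketch of the manual test-bench program a user writes at step 16 of the
# prior-art flow: the user hand-codes the stimulus, wires it to a chosen
# system model, runs the simulation, and hand-codes the pass/fail check.
# The simple first-order "system model" is an illustrative assumption.

def system_model(state, throttle, dt):
    """Hand-picked model (step 14): speed follows throttle with a lag."""
    v = state["speed_mps"]
    v += (throttle * 40.0 - v) / 1.5 * dt   # illustrative dynamics
    state["speed_mps"] = v
    return state

def test_bench():
    """Hand-written test bench (step 16) for the 0-100 km/h requirement."""
    state = {"speed_mps": 0.0}
    dt, t = 0.01, 0.0
    # Run the simulation (step 18) under a hand-coded full-throttle stimulus.
    while state["speed_mps"] < 100.0 / 3.6 and t < 60.0:
        state = system_model(state, throttle=1.0, dt=dt)
        t += dt
    # Manual interpretation (step 20): did the run reach 100 km/h in time?
    reached = state["speed_mps"] >= 100.0 / 3.6
    return reached and t <= 4.0, t

passed, elapsed = test_bench()
print(f"test bench {'PASSED' if passed else 'FAILED'} (t = {elapsed:.2f} s)")
```

Note that every decision in this script — which model to use, what stimulus to apply, what to measure, and what counts as success — is made by the user, which is precisely the manual burden the prior-art flow of FIG. 1 entails.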
In the current literature, there are examples of how such a requirement may be verified using a virtual prototype. However, that literature does not describe how to generate the test without user intervention. For instance, Model-Based Requirement Verification: A Case Study (Feng Liang, et al., 9th International Modelica™ Conference Proceedings, September 2012, Munich) requires user intervention to “select or create design model to be verified against requirements” and to “select or create verification scenarios.” The method is silent as to how the context (environment model, such as road or atmospheric conditions) or the external stimuli (such as driver inputs) might be selected and incorporated into the test. Model-Based Systems Engineering for the Design and Development of Complex Aerospace Systems (Serdar Uckun, et al., SAE International, October 2011) discusses how to measure the impact of design uncertainty on verification results, but it does not teach the integration of requirements with test benches in an automated fashion. Both of these methods assume that verification will be performed using the Modelica™ language, and both are silent as to how to select the best simulation method if it is not feasible to use Modelica™.