Modern electronic and computing systems have large numbers of subsystems, with millions of gates in each subsystem. Because of this, even a minimal system simulation consisting of just a few of the subsystems results in a bloated and unmanageable verification environment. For example, modern scalable multiprocessors may have hundreds or thousands of nodes. Each node may include tens of millions of gates. Testing and verifying such a system quickly becomes difficult, impeding the design process.
Managing complexity in the verification process is not a new idea in the hardware design community. In the past, solutions have ranged from special-purpose cycle-based simulators to, more recently, technologies such as formal verification (model checking and equivalence checking).
Contemporary multiprocessors such as the SGI Origin2000 have addressed these obstacles with innovative methodologies that include formal verification and traditional simulation. In “Origin System Design Methodology and Experience: 1M-gate ASICs and Beyond”, COMPCON-97, Eiriksson et al. describe a high-level language used to test logic whose functionality can be described in terms of packets sent to or received from the logic. The language uses two basic commands: *_inject and *_expect. The *_inject command is used to send packets to the system under test. The *_expect command is used to receive packets from the system under test. In use, the * is replaced by a label associated with a particular stub in the configuration.
In the language described by Eiriksson et al., aspects of system behavior (such as packet format and handling) are hard-coded into the language. For each *_inject and *_expect command, the designer must specify the contents of each field of the packet; the definition of what fields comprise a packet depends on the particular stub to which the command applies.
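The inject/expect pattern can be illustrated with a minimal sketch. The following Python model is hypothetical (the actual language described by Eiriksson et al. is a special-purpose language with packet formats hard-coded per stub); it only captures the command semantics: a stub injects packets into the device under test and checks packets it receives against previously recorded expectations.

```python
from collections import deque

class Stub:
    """Hypothetical model of a test stub driving one interface of the
    device under test (names are illustrative, not taken from the
    language described by Eiriksson et al.)."""

    def __init__(self, name, dut):
        self.name = name
        self.dut = dut
        self.expected = deque()  # expectations checked in FIFO order

    def inject(self, **fields):
        # Analogous to *_inject: send a packet (a set of named fields)
        # to the device under test via this stub.
        self.dut.receive(self.name, dict(fields))

    def expect(self, **fields):
        # Analogous to *_expect: record the packet this stub should
        # receive next; checked when the device produces output.
        self.expected.append(dict(fields))

    def check(self, packet):
        # Compare an actual output packet against the next expectation.
        assert self.expected, f"{self.name}: unexpected packet {packet}"
        want = self.expected.popleft()
        assert packet == want, f"{self.name}: got {packet}, wanted {want}"

class LoopbackDUT:
    """Trivial stand-in for the logic under test: echoes every packet
    back to the stub that sent it, so expectations can be verified."""

    def __init__(self):
        self.stubs = {}

    def attach(self, stub):
        self.stubs[stub.name] = stub

    def receive(self, stub_name, packet):
        self.stubs[stub_name].check(packet)

dut = LoopbackDUT()
cpu = Stub("cpu", dut)
dut.attach(cpu)

cpu.expect(addr=0x1000, data=0xABCD)  # cpu_expect equivalent
cpu.inject(addr=0x1000, data=0xABCD)  # cpu_inject equivalent
```

In this sketch, as in the language described above, the diagnostic is self-checking: a mismatch between an injected stimulus and a recorded expectation raises an error at the stub where it occurs.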
The approach described by Eiriksson et al. suffers from a number of limitations. Since behavior is hard-coded in the language (i.e., primitives such as *_inject and *_expect are written to the behavior of the specific chip), such an approach is not easily ported to a new design. In addition, designers who wish to use this approach must learn a special-purpose language. Finally, the language is neither robust nor extensible.
Other large-scale projects, such as the Cray T3E and the SGI SN1 (the follow-on to the SGI Origin2000), have been verified through the use of very complex test benches constructed around logic modules, typically single chips. For example, the SN1 router chip test bench was written using a hardware description language (HDL) with the addition of C code to support self-checking diagnostics. The SN1 HDL was a combination of behavioral, RTL, and structural constructs used to define the router chip test bench. Directed and random diagnostics were written using this test bench. The heterogeneous verification environment of the SN1 router chip test bench, although effective, was unnecessarily complex and cumbersome.
Various commercial tools are available for extending the capability of current logic simulators to address the ASIC/IC verification problem. Specman by Verisity Design provides an automated functional test generation environment from rules embodied in the design specification. (See, e.g., “Spec-based Verification: a New Methodology for Functional Verification of Systems/ASICs,” a white paper published on the Verisity Design web page: http://www.verisity.com.)
System Science's VERA hardware verification language (HVL) provides an abstract test development environment that replaces the traditional HDL-based test benches with a special-purpose language used to describe self-checking diagnostics. (See, e.g., Mehdi Mohtashemi, “High-Performance Functional Validation,” a System Science white paper available at http://www.systems.com/products/vera/vera.htm.)
Jones and Privitera use a formal specification to automatically generate functional test vectors for Rambus designs. (See K. D. Jones and J. P. Privitera, "The Automatic Generation of Functional Test Vectors for Rambus Designs," Proceedings of the 33rd Annual Design Automation Conference, June 1996, pp. 415-420.) In this approach, the formal specification completely describes the correct behavior of the device; Jones and Privitera use the formal specification to generate random and directed diagnostics using the RS language.
The RS description expresses an abstracted operational interface of the design, as opposed to the logical structure of the device. Since the RS specification is not suitable for simulation, it must be transformed into Verilog and simulated with a netlist of the device. Although this approach proved valuable for verifying Rambus DRAMs, it is not likely to be effective for more general-purpose designs that do not exhibit a high degree of regularity.
The approaches used to date have either been stopgap measures that are quickly overcome by the increasing complexity of the systems being designed, or have been special-purpose, hard-to-port approaches with long learning curves. What is needed is a system and method for verifying logic designs which addresses the above-identified deficiencies.