1. Field of the Invention
This invention relates generally to the verification and simulation of digital circuits developed in a hardware design process, and more specifically to the use of Term Rewriting System (TRS) rules in the simulation of synchronous digital circuits.
2. Background Information
Hardware Description Languages (HDLs) have been used for many years to design digital systems. Such languages employ text-based expressions to describe electronic circuits, enabling designers to design much larger and more complex systems than was possible using previously known gate-level design methods. With HDLs, designers are able to use various constructs to fully describe hardware components and the interconnections between hardware components. Two popular Hardware Description Languages are Verilog, first implemented by Phil Moorby of Gateway Design Automation in 1984 and later standardized under IEEE Std. 1364 in 1995, and VHDL (Very High Speed Integrated Circuit (VHSIC) Hardware Description Language), standardized in IEEE Std. 1076. Both of these languages, and other similar languages, have been widely used to design hardware circuits.
As the complexity of digital circuits has increased, conventional HDLs such as Verilog and VHDL have increasingly shown their limitations. New HDLs based on Term Rewriting System (TRS) technology address some of the limitations of the conventional methods. A TRS employs a list of “terms” that describe hardware states, and a list of “rules” that describe hardware behavior. A “rule” captures both a state-change (an action) and the conditions under which the action can occur. Further, each rule has atomic semantics—that is, each rule executes fully without interactions with other rules. This implies that, even if multiple rules are executed on a given state, they can be considered in isolation for analysis and debugging purposes.
More formally, a Term Rewriting System has rules that consist of a predicate (a function that evaluates to logical true or false) and an action body (a description of a state transition). A rule may be written in the following form:

rule r: when π(s) => s := δ(s)

where s is the state of the system, π is the predicate, and δ is a function used to compute the next state of the system. In a strict implementation of a TRS, only one rule may execute on a given state. However, as explained further below, concurrent application of rules is desirable for efficient execution. Therefore, if several rules are applicable on a given state, some implementations may allow more than one rule to be selected to update the system. Afterwards, all rules are re-evaluated for applicability on the new state of the system, and the process continues until no further rules are applicable.
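For purposes of illustration only, the strict TRS execution semantics described above may be sketched as follows. The rule names and state representation below are hypothetical and do not correspond to any particular TRS-based tool; each rule pairs a predicate π with an action δ, and one applicable rule at a time is executed atomically until no predicate holds.

```python
# Minimal sketch of strict TRS rule execution (illustrative only).
# A rule is a (name, predicate, action) triple; predicates test the
# current state, actions compute the next state (s := delta(s)).

def make_rule(name, predicate, action):
    return {"name": name, "predicate": predicate, "action": action}

def run_trs(state, rules):
    """Strict TRS semantics: select one applicable rule, execute it
    atomically, then re-evaluate all rules on the new state; stop
    when no rule's predicate holds."""
    while True:
        applicable = [r for r in rules if r["predicate"](state)]
        if not applicable:
            return state
        rule = applicable[0]           # one rule executes on a given state
        state = rule["action"](state)  # atomic state transition

# Hypothetical example: a rule that drains a counter into an accumulator.
rules = [
    make_rule("transfer",
              lambda s: s["count"] > 0,
              lambda s: {"count": s["count"] - 1, "acc": s["acc"] + 1}),
]
final = run_trs({"count": 3, "acc": 0}, rules)
```

The loop terminates when no predicate is true on the current state, mirroring the re-evaluation process described above.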
While a TRS approach has been advantageously employed in the design of digital circuits, improvement in the verification (the proving or disproving of the correctness) of these digital circuits has lagged. As the complexity of digital circuits increases, verification consumes an increasingly large portion of the development process's time and resources, now often consuming as much as sixty or seventy percent of development time for a reasonably complex circuit. The development of a digital circuit typically follows an iterative flow, including a variety of stages of design and verification, such that a bottleneck at any particular stage may delay completion of the entire development project.
FIG. 1 is a flow diagram of an exemplary series of steps in the development of a typical digital circuit. A typical development cycle begins with the creation of a design specification (step 110), an outline of the design that describes abstractly the functionality, interface, and overall architecture of the digital circuit. At this stage, the precise details of the implementation are not yet considered. Next, a behavioral description (step 120) may be created to aid in analyzing functionality, performance, compliance with standards, and other high-level design issues. Such a behavioral description may be created in Verilog or VHDL, or may be implemented in a more specialized language such as SystemC, an open-source class library that extends the C++ language to enable hardware design. Such a behavioral description is then typically converted to a Register-Transfer Level (RTL) description (step 130), in which a circuit is characterized by the values in registers at particular clock cycles. In an RTL description, a digital circuit may be abstracted to a series of interconnected finite state machines (FSMs) that encompass the circuit's functionality. Such an RTL description may be created in Verilog or VHDL, or another suitable language.
Next, functional verification (step 140) is typically performed on the RTL description. Functional verification is a key step in the development process, where desired functionality is checked and most functional bugs are located and corrected through modification of the RTL code (see prior step 130). During functional verification, a combination of directed and random tests is typically employed on a simulation of the digital circuit. Such a simulation generally loads HDL code and simulates its behavior in a software environment adapted for testing and analysis. As digital circuit designs become increasingly complicated, the computations necessary for simulation have become a problematic and time-consuming issue in the verification process.
After functional verification, logic synthesis tools are typically employed (step 150) to convert the RTL description (of step 130) to a gate-level netlist (step 160), which is a description of the circuit in terms of gates and the connections between them. Logic synthesis tools (from step 150) generally attempt to produce a gate-level netlist that meets timing, area, power, and other requirements of the design specification. Such specification factors may be checked through logical verification (step 170). Results generated in logical verification may be compared with results obtained during functional verification (see step 140) to ensure correctness of operation. Again, if errors are found, the RTL description (from step 130) may be altered and the sequence repeated. After successful logical verification (see step 170), a physical layout of the digital circuit showing the position of gates and connecting traces is typically created (step 180) with a Place and Route tool. Such a layout is typically subject to layout verification (step 190) and, if any issues are detected, the physical layout (from step 180) may be appropriately modified. Once this verification is complete, the device may be fabricated onto a chip to produce a finished hardware device in step 195.
As noted above, the functional verification and simulation stage is a key stage in the hardware development process. In more detail, functional verification typically begins with the creation of a functional test plan, a fundamental framework for the testing of the digital circuit. Based on this test plan, various routines adapted to test specific functionality of the circuit are developed. These test routines are designed to be applied to a simulation of the digital circuit, often referred to as the design-under-test (DUT). Commonly, a High-Level Verification Language (HVL), such as VERA developed by Synopsys, Inc., is employed to aid in writing test routines and in creating a test environment around the DUT that facilitates testing. HVLs typically combine object-oriented programming approaches with parallelism and timing constructs and thus are well suited for verification. HVLs may be further employed to create input drivers, output drivers, data checkers, protocol checkers, coverage analysis testers, and other devices useful in the verification process.
The test environment interacts through an interface with the simulation of the DUT. Generally, simulators are classified into three basic types, based upon the manner in which they perform simulation. Interpretive simulators, such as Verilog-XL, available from Cadence Design Systems, Inc., operate by reading in an HDL design, creating data structures in memory, and running a simulation interpretively. Interpretive simulators are characterized by their compiling of the HDL code each time the simulation is run.
Compiled-code simulators, such as VCS, available from Synopsys, Inc., operate by reading in an HDL design and converting it to a programming language, such as C. This code is then compiled by a standard compiler to produce a binary executable that may be executed to run the simulation. Compile time may be lengthy for compiled-code simulators, but in general, execution speed is faster than is possible with interpretive simulators.
Finally, native-compiled-code simulators, such as Verilog-NC available from Cadence Design Systems Inc., operate by reading in an HDL design and converting it directly to binary code for a specific machine platform. Compilation is optimized specifically for this platform, making the simulation machine specific. Due to the machine specific optimizations, native-compiled-code simulators can yield significant performance benefits compared to other types of simulators.
Regardless of their type, HDL simulators employ a simulation strategy to simulate design elements. The simplest simulation strategy is termed “oblivious,” or alternately “exhaustive,” simulation. In oblivious simulation, the simulator processes and updates the state values of all elements (modules) in the design, irrespective of changes in signals. That is, the state value of each module is updated at every time step (clock cycle), regardless of whether there is activity, i.e., a change in the state of the system that affects the particular module. Computing all state values for all clock cycles is typically redundant and generally consumes unnecessary computing resources. Indeed, an oblivious simulator may perform quite inefficiently when a module is inactive, i.e., does not change state, for many clock cycles.
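The oblivious strategy may be sketched, purely for illustration, as follows. The module names and update functions below are hypothetical; the essential point is that every module's state value is recomputed on every clock cycle, whether or not its inputs have changed.

```python
# Illustrative sketch of oblivious (exhaustive) simulation: every
# module's update function runs on every clock cycle, irrespective
# of signal activity.

def oblivious_step(state, modules):
    """One clock cycle: recompute the state value of every module."""
    new_state = dict(state)
    for name, update in modules.items():
        # Computed even when the module's inputs are unchanged.
        new_state[name] = update(state)
    return new_state

# Two hypothetical modules: 'counter' changes every cycle, while
# 'constant' never changes -- yet both are re-evaluated each cycle.
modules = {
    "counter":  lambda s: s["counter"] + 1,
    "constant": lambda s: s["constant"],
}
state = {"counter": 0, "constant": 7}
for _ in range(4):
    state = oblivious_step(state, modules)
```

After four cycles the constant module has been re-evaluated four times without ever changing value, illustrating the redundant computation described above.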
In an attempt to address this inefficiency, various schemes have been developed for reducing the amount of computation necessary for simulation, including schemes for reordering, rewriting, queuing, deferring, or otherwise systematically evaluating a subset of the system's state values. One of the most prevalent approaches to reducing computation is termed “event-driven” simulation. Event-driven simulation is characterized by the computation of state values for modules only when signals at the inputs of these modules change (herein termed an “event”). Accordingly, FIG. 2 depicts a generalized block diagram of an event-driven simulator 200 for the simulation of an HDL design according to a prior art implementation. In an event-driven simulator, a series of modules 230, 232 and 234 are generally interconnected to, and intercommunicate with, a simulation core 210 that manages the simulation process. A first software module 230 is executed in response to an initial event to begin the simulation cycle. This module, in turn, generates further events which are delivered to the simulation core 210 for transfer to other modules 232, 234. The simulation core 210 typically maintains an event-response table 220 to determine where events from a given module are to be transferred. For example, as illustrated in FIG. 2, a first event, event(1), may cause a module 230 to generate a second event, event(2), which is routed by the simulation core 210 to two additional modules 232 and 234. Such a process of events-causing-further-events may continue for many levels in complex designs.
When an event is delivered to a module 230, 232 or 234, a corresponding child process 240, 242 or 244 is called to execute the code in the module. A current state 250, 252 or 254 is also formed for the module 230, 232 or 234, representing an event triggered by the child process 240, 242, 244 and the current state of any global variables used as inputs to the module. The child processes 240, 242, 244 typically execute under the control of an operating system until execution is completed. Typically, an event-driven simulator operates within a multi-threading operating system, with each child process 240, 242, 244 representing a separate thread of execution; these threads execute contemporaneously within the processor's runtime.
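The event-driven scheme of FIG. 2 may be sketched, for illustration, as an event queue serviced by a simulation core that consults an event-response table. The module and event names below are hypothetical and loosely follow the example in FIG. 2, where event(1) causes module 230 to generate event(2), which is routed to modules 232 and 234.

```python
# Illustrative sketch of an event-driven simulation core. A module is
# evaluated only when an event arrives at its inputs; the event-response
# table maps each event to the modules it should be transferred to.
from collections import deque

def simulate(initial_event, modules, event_table):
    """modules: name -> function(event) returning a list of new events.
    event_table: event name -> list of module names to notify.
    Returns the number of module evaluations performed."""
    queue = deque([initial_event])
    evaluations = 0
    while queue:
        event = queue.popleft()
        for target in event_table.get(event, []):
            evaluations += 1
            queue.extend(modules[target](event))  # events cause further events
    return evaluations

# event1 triggers module A, which emits event2, routed to modules B and C.
modules = {
    "A": lambda e: ["event2"],
    "B": lambda e: [],
    "C": lambda e: [],
}
event_table = {"event1": ["A"], "event2": ["B", "C"]}
count = simulate("event1", modules, event_table)
```

Only the three modules reached by events are evaluated; modules absent from the routing of a given event are never computed, in contrast to the oblivious strategy.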
While event-driven simulation offers significant improvements over oblivious simulation, the approach still contains notable inefficiencies. In event-driven simulation, when an event is processed, all dependent data is updated, through the triggering of additional events, to determine a new global state of the system. Values for data that will not control the next state transition of the system, and values for data which is transient, i.e., that will not be stored due to subsequent logic control, are still updated, and cause events to be triggered, even though these values are not used. For example, consider the effects of event-driven simulation on a hypothetical digital circuit design which includes the following HDL pseudocode:
reg [3:0] q;
reg [3:0] e;
wire [3:0] y = q + 5;

always @(posedge clock) begin
    q <= z + 2;
end

always @(posedge clock) begin
    if (controlsignal) begin
        e <= y;
    end else begin
        e <= 5;
    end
end
In the above hypothetical example, the value of y depends on the value of q, which in turn depends on the value of z. Hence, in an event-driven simulator, a change in the value of z would trigger events causing the values of q and y to be updated by child processes. Yet the values of y and q are only needed when they control the next state transition, in this example, only when the Boolean value controlsignal is logical true. Thus, when controlsignal is logical false, needless computation is performed by an event-driven simulator to update values of y and q that are never used.
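The wasted work in this example may be made concrete with a small model, shown here purely for illustration (an assumed Python rendering of the pseudocode above, not the output of any simulator). Each change of z forces two transient updates, of q and then y, even when controlsignal is false and the result is discarded.

```python
# Illustrative model of the hypothetical circuit: on a clock edge
# following a change of z, an event-driven simulator updates q and y
# via triggered events, whether or not e will consume y.

def clock_edge(controlsignal, z):
    """Model one clock edge; returns (e, transient_updates), where
    transient_updates counts the event-triggered recomputations."""
    q = (z + 2) & 0xF          # q <= z + 2  (4-bit register, event on z)
    y = (q + 5) & 0xF          # y = q + 5   (4-bit wire, event on q)
    transient_updates = 2      # q and y recomputed on every change of z
    e = y if controlsignal else 5
    return e, transient_updates

# controlsignal false: e is simply 5, yet q and y were still updated.
e_false, wasted = clock_edge(False, 1)
# controlsignal true: the same two updates, but now y is actually used.
e_true, _ = clock_edge(True, 1)
```

With controlsignal false the two updates contribute nothing to the next state, which is precisely the needless computation identified above.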
Again, as digital circuit designs become increasingly complex, simulation emerges as a primary computational bottleneck in the overall hardware development process, consuming unacceptable amounts of processing time. The inefficiencies associated with computing redundant or transient values in an oblivious or an event-driven simulator merely compound this already computationally intensive stage. It would be desirable for an HDL simulator to function in a manner that eliminates the computation of unnecessary values that do not affect the current state of the system. An improved simulator should address this inefficiency without requiring an inordinate amount of computation in other areas, so that an overall performance gain may be realized.