Prior to manufacture of an integrated circuit, designers verify the functionality of their designs (referred to herein as the “design under verification”, or “DUV”). The DUV is usually provided in the form of a netlist description of the design. The netlist may have been derived from many sources, including from a hardware description language. A netlist description (or “netlist”, as it is referred to by those of ordinary skill in the art) is a description of the circuit's components and electrical interconnections between the components. The components include all those circuit elements necessary for implementing a logic circuit, such as combinational logic (e.g., gates) and sequential logic (e.g., flip-flops and latches).
Design verification is performed because fabricating an integrated circuit is expensive and takes time. If the circuit design contains functional errors, the design will have to be corrected and then re-fabricated. Thus, many different platforms for functional verification of integrated circuits have been developed. Hardware-based design verification systems such as logic emulation systems are known devices that implement a user's design in a plurality of programmable integrated circuits. Such logic emulation systems are available from various vendors, including Cadence Design Systems, Inc., San Jose, Calif., United States of America, and others. Typical emulation systems utilize either programmable logic chips or processor chips that are programmably interconnected. In processor-based emulation systems, the DUV is processed so that its functionality appears to be created in the processors by calculating the outputs of the design. The logic itself is not implemented in a processor-based emulation system. Examples of hardware logic emulation systems using processor chips can be seen in, e.g., U.S. Pat. Nos. 5,551,013, 6,035,117 and 6,051,030, which are incorporated herein by reference.
Another type of design verification system is known as a software simulator. Software simulators differ from hardware verification solutions in that the simulation software is executed in general purpose computers rather than in specialized hardware like an emulation system. Simulation is a software-based approach in which the DUV (or a portion thereof) and a testbench are compiled into a machine-executable model and executed on a workstation or PC. A testbench is a series of test vectors (i.e., stimulus) that are used to stimulate a design and may include modules that receive the outputs from the DUV after the vectors are run in the DUV. Since most electronic designs are presently designed using hardware description languages (“HDL”) such as Verilog, a testbench is typically comprised of some HDL code.
Hardware-based verification systems and software-based verification systems each have benefits that the other lacks. For example, software-based verification systems allow for very accurate verification using the exact timing (i.e., clocking) that the actual DUV will encounter when installed in a real electronic system. Software simulators allow a user to see the result of every logic operation and can graphically represent the signal transitions from high to low or from low to high on a computer monitor. While hardware-based verification systems do not normally have this ability, hardware-based systems are orders of magnitude faster than software-based systems and therefore provide very fast verification.
In order to debug a DUV, the designer needs to look into the activity of design signals over time. The reason for this is that digital circuits are driven by one or more clocks, and errors can occur at various transitions of the clocks driving the DUV. The designer faces at least two issues when evaluating design signals over time. One issue is which signal to observe (i.e., which node in the DUV to observe). A second issue is when to observe the signals (i.e., at what clock transition and/or which confluence of events, sometimes referred to as a trigger). These two issues pose serious challenges for simulation and emulation tools. First, circuit designs are typically very large (e.g., on the order of millions of gates). Second, the number of signals the designer would like to observe is proportionally large. Third, since the time window in which design signals need to be observed (referred to herein as the “trace window”) is hard to predict prior to simulation or emulation, the designer who is debugging a design would prefer the trace window to be as large as possible.
In order to handle these issues, circuit designers have used various approaches. One such approach is to run the DUV in lockstep in a simulator. With this approach, the progress of the simulation is controlled interactively by the designer. Designers can run the simulation, stop and observe signals, continue, and repeat the process. When the simulation stops, designers can check the state of any signal in the design. A second approach is to perform free-running simulation with signal dump. With the “free running” approach, the simulation executes freely without user intervention, and signals to be observed are dumped out during simulation. It is important to note that the signals to be dumped out must be specified before the simulation starts. These simulation approaches, while effective, are very slow. A third approach is to emulate the DUV using an emulator that allows full visibility for a fixed-size trace window. In this approach, the emulator runs freely, and the signals generated by the DUV in the emulator, which provide full visibility, are saved for a certain period of time. A final approach is to emulate the DUV with an emulator that provides for limited visibility and replay. With this approach, limited information is saved during emulation. Designers might need to run the emulation a few times in order to get sufficient information for analysis.
As discussed, one way of using a hardware-based verification system is in conjunction with a software simulator. This is sometimes referred to as simulation acceleration. Because emulators operate at speeds that can be orders of magnitude faster than simulators, emulation systems, and in particular processor-based emulation systems, contain vast amounts of information about the state and activity in the emulated circuit. The reason for this is as follows. Simulators allow designers to view the state of a signal as it exists at a specific node in the DUV at a specific time (i.e., clock cycle) immediately after a single cycle of a simulation. A simulation cycle is the amount of time (and thus the state changes that take place) during one step of the fastest clock driving the DUV. In other words, the end of a simulation cycle is the time of interest during a verification operation, because signal transitions that take place within a simulation cycle are typically not relevant. The only signal transitions that generally are important in a DUV are those that exist at the end of a simulation cycle.
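The notion of a simulation cycle can be sketched conceptually as follows. This is a minimal, illustrative Python model only (not any vendor's simulator): the design is a hypothetical 2-bit counter, and only the signal values that exist at the end of each cycle are recorded.

```python
# Conceptual sketch of cycle-based evaluation: only end-of-cycle signal
# values are observed. The 2-bit counter "design" here is hypothetical
# and purely illustrative.

def simulate(num_cycles):
    """Advance the modeled DUV one simulation cycle at a time; record
    only the signal value that exists at the end of each cycle."""
    state = 0  # flip-flop contents at the start of simulation
    end_of_cycle_values = []
    for _ in range(num_cycles):
        # Intermediate combinational transitions within the cycle are
        # computed but never reported; they are typically not relevant.
        state = (state + 1) % 4
        end_of_cycle_values.append(state)
    return end_of_cycle_values

print(simulate(5))  # [1, 2, 3, 0, 1]
```

The point of the sketch is that the observable record grows by one sample per cycle of the fastest clock; anything faster than that granularity is invisible by construction.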
In simulation environments, the stimulus that drives the DUV during a simulation cycle is modeled on the workstation. In contrast, in simulation acceleration environments, the stimulus driving the DUV is sent to the emulator that is emulating the DUV, and the response is then sent back to the simulator. Because emulators run DUVs at clock speeds that are dramatically faster than simulators, simulation acceleration can dramatically decrease the amount of time it takes to verify a DUV.
Assertions have become a popular mechanism to help designers identify the cause of errors in DUVs. An assertion is a statement of a property that can describe the assumed behavior of inputs or the expected behavior of internal signals or outputs of a design. Adding assertions to a design has many benefits, from documenting the intended behavior to aiding in the verification of the design. Assertions can be used in a simulation or emulation environment to flag erroneous behavior, whether it is invalid stimulus or incorrect behavior of any signal in the design. Customers want to be able to develop and use assertions in their simulation environment, and then transition both the assertions and the design to simulation acceleration and emulation environments.
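The idea of an assertion as a property over design signals can be illustrated with a small sketch. In practice such a property would be written in an HDL assertion language; the Python below is a conceptual model only, and the signal names (req, ack) and the 2-cycle bound are assumed for illustration.

```python
# Conceptual model of an assertion: a property stating that every
# request (req) must be acknowledged (ack) within max_wait cycles.
# Signal names and the bound are illustrative assumptions.

def check_req_ack(trace, max_wait=2):
    """Return the cycle numbers at which the property fails.
    `trace` is a list of (req, ack) samples, one per clock cycle."""
    failures = []
    for cycle, (req, _ack) in enumerate(trace):
        if req:
            window = trace[cycle + 1: cycle + 1 + max_wait]
            if not any(ack for (_req, ack) in window):
                failures.append(cycle)  # assertion fires at this cycle
    return failures

# The req at cycle 0 is acknowledged; the req at cycle 2 never is.
trace = [(1, 0), (0, 1), (1, 0), (0, 0), (0, 0)]
print(check_req_ack(trace))  # [2]
```

Flagging the failure at the cycle where the offending request occurred, rather than wherever its downstream effects surface, is what makes assertions useful for localizing errors close to their source.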
Assertions throughout the design help to identify the cause of an error as soon as it occurs, close to the source. This can be especially important in emulation environments, where capturing waveforms is generally limited to some predefined window of time. The size of the window impacts memory requirements and upload times. If too little is captured, there is a risk that the information needed for debugging will not be available, so capturing bugs close to their source improves the likelihood of being able to debug a failure without having to repeat the run. Assertions are also important for debugging failures without repeating a run, since emulation runs can take days or even weeks to reproduce. Improving the debug capability makes the overall verification process much faster.
Two environments are considered in developing assertion reporting solutions. The first environment exists when the emulator can be stopped in the middle of a run; this applies to emulation environments with no targets or with static targets. (Here a target is thought of as real hardware that is outside of the emulator.) The second environment exists when the emulator cannot be stopped in the middle of a run. The environment that does not tolerate stopping the emulator clock applies to emulation environments that use dynamic targets. With dynamic targets, the clock to the system cannot be stopped because either:
    a. the clock is generated in the target and is not controlled by the emulator; or
    b. the clock is generated in the emulator, but the target needs a constant clock to maintain integrity. For example, dynamic memories that require the clock for refresh can become corrupted, thus disrupting the integrity of the system.
The interrupt-style approach of processing assertion failures will not work in environments that use dynamic targets.
One existing solution uses system tasks like $display and experiences a performance degradation. There are many issues with using system tasks:
    1. Not all emulators support system tasks, and those that do have limitations. In most cases string data is not supported. Environments with dynamic targets, which cannot function correctly if the clock is stopped, will not be able to support system tasks.
    2. Performance is impacted, and possibly capacity, depending on the vendor implementation. The system task is executed in succession for each assertion that fires, resulting in slow performance. There is an even bigger performance and capacity impact when non-synthesizable constructs are displayed, like $time and strings.
    3. The information provided for debug when an assertion fires is limited. System tasks provide data for a single point in time, which is usually insufficient for debug. Waveforms are needed because they provide historical data. Waveform upload is done from a system prompt, whereas the system task is usually in the HDL code, and the two are not linked. Thus, automatic waveform generation cannot be done with system tasks. This is not an issue in a purely simulation environment, because waveforms are captured continuously, but it is an issue in an emulation environment, where waveforms must be uploaded on demand and cover some window in time, depending on the memory allocated.
    4. The automatic capture of waveform information is not linked to the firing of an assertion.
Another existing solution provides assertion instrumentation with an OR gate. This solution provides a hardware status bit for each assertion and logically ORs all the status bits to produce an interrupt-like signal that indicates that an assertion fired (e.g., a failure was detected). When the interrupt-like signal is asserted, the emulator stops to report which assertions fired. A mapping file exists that associates with each assertion the fixed debug information, such as the filename, the line number of the original assertion, and the severity. However, the following issues exist with using an OR gate:
    1. This is the slowest configuration possible, because the number of accesses versus the bits per access cannot be configured to optimize for the hardware. Once an assertion firing is recognized, software will have to suspend the run (which stops the clock) and poll each assertion state signal to report which assertion(s) fired. A linear search is performed through the assertion-fired bits (e.g., assertion #1 is checked to see if it has fired by querying its hardware state bit, then assertion #2 is checked to see if it has fired, then assertion #3 is checked to see if it has fired, and so on). Each access requires information to be transferred from the emulator to the workstation.
    2. The synthesis of the OR gate cannot be controlled, making it possible that the assertion instrumentation will be a critical path that limits runtime speeds.
    3. When an assertion fires, the name of the assertion, the time or cycle number, the line of code and the filename are reported. But it is difficult to know when to capture waveforms and what signals to capture.
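The OR-gate scheme and its linear polling cost can be sketched conceptually. The following Python model is illustrative only (the class name and access counter are assumptions, not any vendor's API); it shows why the cost of identifying the fired assertions grows with the total number of assertions, since each status bit read is a separate emulator-to-workstation transfer.

```python
# Conceptual model of the OR-gate instrumentation: one hardware status
# bit per assertion, all ORed into an interrupt-like signal, then a
# linear one-bit-per-access poll to find which assertions fired.
# Names and the access counter are illustrative assumptions.

class OrGateStatus:
    def __init__(self, fired_bits):
        self.fired_bits = fired_bits  # one hardware state bit per assertion
        self.accesses = 0             # emulator-to-workstation transfers

    def interrupt(self):
        # Logical OR of every status bit: "some assertion fired".
        return any(self.fired_bits)

    def poll(self):
        # Linear search: each bit read is a separate access, so the cost
        # scales with the total number of assertions, not with how many
        # actually fired.
        fired = []
        for index, bit in enumerate(self.fired_bits):
            self.accesses += 1
            if bit:
                fired.append(index)
        return fired

status = OrGateStatus([0, 0, 1, 0, 1, 0, 0, 0])
if status.interrupt():                     # emulator suspends the run here
    print(status.poll(), status.accesses)  # [2, 4] 8
```

In this sketch, finding two fired assertions among eight still costs eight accesses; a design with millions of gates and proportionally many assertions makes this per-bit polling the slowest possible configuration.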
When dynamic targets are used, the clock to the system cannot be stopped, making this solution unworkable. A major issue with transitioning assertions from simulation to emulation is the reporting capabilities that are supported across the different environments, and their effects on capacity and performance. The problem with existing approaches is that they make use of a reporting mechanism that requires extensive interaction with a workstation. This is detrimental to performance in simulation acceleration and emulation environments, and can result in the loss of information when dynamic targets are used.