1. Field of the Invention
The present invention relates to the field of the verification of digital hardware.
2. Brief Description of the Related Art
Functional Hardware Verification
The verification of digital hardware is a step in the production of integrated circuits. If the verification of the digital hardware fails to remove all bugs in the digital hardware, a production process for the digital hardware, with its high fixed costs, may need to be restarted and the introduction of a product incorporating the digital hardware into the marketplace will suffer a delay.
One step in the design process of digital hardware is functional verification, by which the initial description of a circuit of the digital hardware is checked to see whether the circuit always behaves as intended. A description of the circuit is given as an RTL description in some hardware description language (e.g. VHDL, Verilog, or SystemVerilog). Current verification methods typically do not identify all of the functional bugs in a design of the circuit. The reasons why the functional bugs remain undetected can be classified as follows:
Unstimulated Bugs: These functional bugs are not found because the stimuli applied to the design of the circuit fail to exercise and propagate the unstimulated bugs to the inputs of a checker of the circuit.
Overlooked Bugs: These functional bugs are stimulated and propagated to the inputs of the checker, but the checker is not designed to identify the overlooked bugs.
Falsely Accepted Bugs: These functional bugs arise from a consistent misinterpretation of a specification of the design of the circuit by the implementers and the verification engineers.
Verification by Simulation
The workhorse for functional verification is simulation. Simulation-based verification methods are prone to all three classes of undetected bugs. Simulation fails to stimulate all of the functional bugs because of the factor of 10⁶ or more between the simulator speed and the real-time execution of the circuit under test. Consequently, the simulation cannot deploy all of the stimuli necessary to exhaustively verify the design of the circuit in the project time available. Simulation coverage metrics do not relieve this situation; they can only assist in the allocation of restricted verification capacity across the design of the circuit.
The problem of the overlooked bugs is therefore generally handled by verification planning. Verification tasks are identified by examination of the specification of the circuit and the architecture, by relating common design patterns to appropriate assertions, or by asking the designers of the circuit to note particularly important relations between signals in the circuit. The completeness of the resulting verification tasks is typically compromised by human error. Therefore, practitioners keep this verification planning “dynamic” throughout the verification phase in order to capture new insight into unmet verification needs.
Formal Functional Verification
Formal verification is regarded as an alternative to simulation-based verification. In formal functional verification, so-called properties are proven against the design of the circuit to ensure proper operation of the RTL description. The formal verification uses methods of mathematical proof and therefore acts as though the circuit has been stimulated with all possible input stimuli. See, Browne, Clarke, Dill, Mishra: “Automatic Verification of Sequential Circuits Using Temporal Logic.” Therefore, in the terms of the classification above, the formal verification leads to an avoidance of unstimulated bugs in the design after the formal verification has been completed.
Formal verification has recently been complemented by an approach to ensure that a set of the properties precisely examines the entire input/output behaviour of the circuit. The set of properties is then termed “complete”. In the terms of the classification above it avoids overlooked bugs in the design after the verification.
Formal Equivalence Verification
Besides formal functional verification, there is formal equivalence verification. The goal of the formal equivalence verification is to verify process steps of the design after the RTL description has been designed and verified. The verification of these design process steps requires the comparison of the circuit description before and after the design process step. For example, to verify a synthesis step, the RTL description is formally compared with a synthesized net list. Due to its simple user interface, equivalence checking today is the most widely used approach for comparing two descriptions of the same design.
Algorithms used in the formal equivalence verification compare two descriptions of the same design by extracting an automaton from each of these descriptions, by identifying pairs of corresponding input bits, output bits, and state bits in the two automata, and then comparing the next state and output functions of corresponding state and output bits in the two descriptions. This is referred to as combinational equivalence verification. Combinational equivalence verification is only applicable if both circuit representations have the same state encoding.
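The comparison of next-state and output functions described above can be illustrated with a minimal Python sketch. The two automata here are hypothetical toy examples, and the exhaustive enumeration merely stands in for the SAT- or BDD-based reasoning that real equivalence checkers employ:

```python
from itertools import product

def combinational_equivalence(next_a, next_b, out_a, out_b, n_state, n_input):
    """Compare two automata that share the same state encoding by checking
    that corresponding next-state and output functions agree for every
    combination of state bits and input bits.
    (Illustrative only; real tools reason symbolically, not by enumeration.)"""
    for state in product([0, 1], repeat=n_state):
        for inp in product([0, 1], repeat=n_input):
            if next_a(state, inp) != next_b(state, inp):
                return False  # next-state functions differ for this pair
            if out_a(state, inp) != out_b(state, inp):
                return False  # output functions differ for this pair
    return True

# Two descriptions of a toggle bit: one via XOR, one via modular addition.
next1 = lambda s, i: (s[0] ^ i[0],)
next2 = lambda s, i: ((s[0] + i[0]) % 2,)
out = lambda s, i: (s[0],)

print(combinational_equivalence(next1, next2, out, out, n_state=1, n_input=1))
```

Note that this check presupposes the pairing of corresponding state bits; as stated above, it fails as soon as the two descriptions use different state encodings.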
Processor Verification
General Task
Processors are typically developed such that programmers writing assembler code to be executed by the processor do not need to understand a hardware description of the processor in detail. Instead, it suffices for the programmer to view the processor as though one instruction had been fully executed before the processor begins with the execution of the next instruction. This model of the processor is called the architecture or architecture description of the processor and will be described in more detail below.
For reasons of efficiency, processors are implemented in such a way that they execute multiple instructions simultaneously, e.g. in a pipeline. This requires the design of sequentializing mechanisms that make pipeline effects invisible to the user or that secure efficient operation of the processor. Such sequentializing mechanisms are, for example, forwarding, stalling, or speculative execution, and will be described below. These sequentializing mechanisms are represented in the RTL description of the processor. The RTL description will be referred to as the implementation description below.
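As an illustration of such sequentializing mechanisms, the following Python sketch models a hazard-handling decision between two adjacent pipeline stages. The instruction format and the `fwd_ready` flag are assumptions made for this example, not features of any particular processor:

```python
def resolve_hazard(decode_instr, execute_instr):
    """Illustrative hazard handling in a simple pipeline: if the instruction
    in the decode stage reads a register that the instruction in the execute
    stage writes, either forward the in-flight result or stall for a cycle.
    Hypothetical instruction format: {"dst": reg, "srcs": [regs], "fwd_ready": bool}."""
    if execute_instr is None:
        return "proceed"  # no older instruction in flight
    if execute_instr["dst"] in decode_instr["srcs"]:
        # A result produced early enough (e.g. an ALU result) can be
        # forwarded; otherwise (e.g. a load still in flight) we must stall.
        return "forward" if execute_instr["fwd_ready"] else "stall"
    return "proceed"

add_r3 = {"dst": 3, "srcs": [1, 2], "fwd_ready": True}    # ALU op writing r3
load_r3 = {"dst": 3, "srcs": [5], "fwd_ready": False}     # load writing r3
use_r3 = {"dst": 4, "srcs": [3, 0], "fwd_ready": True}    # reads r3

print(resolve_hazard(use_r3, add_r3))   # "forward"
print(resolve_hazard(use_r3, load_r3))  # "stall"
print(resolve_hazard(use_r3, None))     # "proceed"
```

In either case the programmer-visible result is as if the instructions had executed one after the other, which is exactly the property the verification must establish.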
The verification problem for the processors is to show that the implementation indeed executes programs in the way that the architecture suggests. This verification problem is a functional verification task, as it verifies the RTL description including the sequentializing mechanisms. However, it can also be viewed as an equivalence verification task between two descriptions of the same circuit, namely the architecture description and the implementation description. Still, this equivalence verification task goes far beyond the currently known equivalence verification approaches. The reason for this is that the design step that turns the architecture description of the processor into the implementation description involves human creativity and the introduction of elaborate mechanisms such as pipelining, forwarding, speculative execution of instructions, or stalling. In particular, the architecture description and the implementation description of the processor differ in the timing of the circuit. The time difference between the completion of one instruction in the implementation description of the processor and the next instruction in the same processor can vary widely. In superscalar processors, the execution of one instruction may even overtake the execution of other instructions, such that the order of completion of the instruction execution differs from the sequence of instructions of the program. The detailed temporal relation between the architecture description and the implementation description is typically not important to the programmers; they are interested in a gross average throughput of instructions when writing their programs.
The equivalence verification between the implementation description and the architecture description of a processor is exacerbated by interrupts. An interrupt arrives at the processor when the processor receives appropriate values on an input signal. Depending on an internal state of the processor, the processor decides if it accepts the interrupt or not. Upon acceptance of an interrupt, the processor will execute the interrupt. This interrupt execution typically replaces the execution of an instruction, the execution of which was already started by the processor. Part of the interrupt execution is to switch to the execution of another part of the program, the interrupt service routine.
During implementation, a decision is made by the designers regarding which of the instructions that the processor executes when an interrupt arrives should be replaced by the execution of the interrupt. This decision must be accounted for during the equivalence verification.
Processor Verification
In industrial processor verification, the general idea for simulation-based verification is to make both the implementation description and the architecture description execute the same program and then to compare the traffic in the communication between the processor and the data memory in both the implementation description and the architecture description. This approach executes the processor verification by examining the implementation and the architecture based on what is observable at the respective interfaces. Bugs are found when the traffic to the memory of the implementation and the architecture deviate from one another.
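This black-box comparison of memory traffic can be sketched as follows; the transaction format (operation, address, data) is an assumption chosen for illustration, and real flows would also account for reordering and timing:

```python
def compare_memory_traffic(impl_trace, arch_trace):
    """Compare the sequences of memory transactions produced by the
    implementation description and the architecture description running
    the same program. Returns the index of the first divergence, or None
    if the traces agree."""
    for i, (impl_tx, arch_tx) in enumerate(zip(impl_trace, arch_trace)):
        if impl_tx != arch_tx:
            return i  # a bug becomes observable here
    if len(impl_trace) != len(arch_trace):
        return min(len(impl_trace), len(arch_trace))  # one trace ended early
    return None

# Hypothetical transaction traces: (operation, address, data).
impl = [("write", 0x100, 7), ("read", 0x104, 0), ("write", 0x108, 9)]
arch = [("write", 0x100, 7), ("read", 0x104, 0), ("write", 0x108, 5)]

print(compare_memory_traffic(impl, arch))  # diverges at index 2
```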
Programs used to verify the processor are fed into the architecture description and the implementation description. The programs are either specially developed, randomly generated, or derived from application programs, e.g. the booting of an operation system.
A problem arising from this so-called “black box” approach is related to interrupts. The comparison requires that the interrupts arrive at the implementation description and the architecture description at corresponding points in time. The exact correspondence between the interrupts in the implementation description and the architecture description is often manually provided by a verification engineer, which is tedious and error-prone.
Often, the simulation-based verification not only examines the processor through its interface signals, but also checks that properties about the relation of internal signals hold. These internal signal properties are temporal logic expressions that are expected to be satisfied for every clock cycle of the implementation and are commonly referred to as assertions. The verification approach using the assertions is termed Assertion Based Verification. The assertions are often provided by the design engineers who develop the implementation description. If the properties are not satisfied, the simulation issues an error or warning message which allows the designer to identify a bug long before the bug becomes observable at the interface signals of the processor.
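The per-cycle evaluation of such assertions over a simulation trace can be sketched in Python. The handshake signals `req` and `ack` and the two-cycle window are hypothetical examples; in practice such assertions would be written in a temporal assertion language such as SVA or PSL:

```python
def check_assertion(trace, antecedent, consequent, window):
    """Check a simple temporal assertion over a per-cycle signal trace:
    whenever `antecedent` holds in cycle t, `consequent` must hold in some
    cycle t .. t+window. Returns the list of violating cycles."""
    violations = []
    for t in range(len(trace)):
        if antecedent(trace[t]):
            horizon = min(t + window + 1, len(trace))
            if not any(consequent(trace[u]) for u in range(t, horizon)):
                violations.append(t)  # assertion fails starting at cycle t
    return violations

# Hypothetical handshake: every 'req' must see an 'ack' within 2 cycles.
trace = [
    {"req": 1, "ack": 0},  # cycle 0: req, ack arrives in cycle 1 -> ok
    {"req": 0, "ack": 1},
    {"req": 1, "ack": 0},  # cycle 2: req, no ack by cycle 4 -> violation
    {"req": 0, "ack": 0},
    {"req": 0, "ack": 0},
]

print(check_assertion(trace, lambda c: c["req"], lambda c: c["ack"], window=2))  # [2]
```

A message reporting cycle 2 lets the designer localize the bug long before it would propagate to the interface signals of the processor.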
Once a certain level of confidence in the implementation of the processor is reached, self-testing programs are also applied, which calculate certain results using two different sequences of instructions and compare these results.
However, as discussed earlier, simulation-based approaches suffer from the risk of undetected bugs, either because the undetected bugs were not stimulated or have been overlooked.
Most applications of formal verification concentrate on the formal examination of properties, which in principle identifies all contradictions to the properties. However, formal verification does not account for the underlying problem that the properties may overlook bugs in the sense of the above bug classification.
The application of formal verification to processors has already been studied in academia. Burch and Dill developed an idea of control path verification, i.e. verification of those parts in simple pipelines that decide upon how the processor combines which data. See Jerry R. Burch, David L. Dill: “Automatic Verification of Pipelined Microprocessor Control.” CAV 1994: 68-80. However, Burch and Dill did not consider data paths at all, i.e. those parts that actually transfer or combine the data depending on the signals from the control path. The present invention allows for the verification of the entire processor, including the control paths and the data paths. Several extensions to superscalar or out-of-order processors have been developed. However, the approaches described in these papers only focus on specific parts of specific designs, i.e., they do not offer a complete verification of the implementation description against the architecture description. In addition, automation is typically low and there is no integration with efficient debug environments, as provided in the invention.
The most advanced approach to processor verification is based on the completeness approach. To this end, properties that capture the architecture description must be written. These properties are to be proven against the design. It must be shown using the completeness checker that the properties do not overlook bugs. This ensures that upon complete formal verification, no unstimulated or overlooked bugs remain in the design.