The present disclosure relates to verification techniques for integrated circuit logic design and more particularly to identification of unobservability causality in logic optimization flows.
Redundancy in logic designs is undesirable, entailing unnecessary overhead in area and power. Types of logic redundancy include: logic which falls out of the cone-of-influence (COI) of any primary output or is otherwise “unobservable”, logic gates which are functionally equivalent to constant 0/1, logic gates which are functionally equivalent to each other, and logic gates which become unobservable after other simplifications. It is good practice to uncover as much redundancy as possible in the early phases of the design cycle, so that it can be eliminated to save overall budget (area, power, timing, etc.) of the chip. This is particularly essential in a design reuse framework, where logic components may be reused in multiple chips, possibly under different reconfigurations or input/output conditions. It can also be valuable in high-level design exploration, where one may wish to explore area tradeoffs when reconfiguring a design component in various ways.
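The first category above, logic outside the cone-of-influence of any primary output, can be sketched as a simple backward traversal over fan-in edges. The following is a minimal illustrative sketch only; the netlist representation (a mapping from each gate to its fan-in list) and the gate names are assumptions for illustration, not the data model of any particular tool.

```python
from collections import deque

def cone_of_influence(fanins, outputs):
    """Return every gate reachable by backward traversal from the
    primary outputs over fan-in edges (the structural COI)."""
    seen = set()
    queue = deque(outputs)
    while queue:
        gate = queue.popleft()
        if gate in seen:
            continue
        seen.add(gate)
        queue.extend(fanins.get(gate, []))
    return seen

# Toy netlist: gate -> list of its fan-in gates/inputs (illustrative).
fanins = {
    "out": ["g1"],
    "g1": ["a", "b"],
    "g2": ["b", "c"],  # g2 drives nothing reaching "out"
}
coi = cone_of_influence(fanins, ["out"])
unobservable = set(fanins) - coi
# g2 falls outside the COI of every primary output and is redundant.
```

Gates in `unobservable` correspond to the first redundancy category and can be removed without affecting any primary output.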
Numerous methods exist to identify redundancy. Combinational optimizations are those that do not leverage any sequential design information (e.g., unreachable states): they affect only combinational logic, under the assumption that any state is reachable. Sequential optimizations may alter sequential logic (e.g., latches) and may leverage sequential design information such as unreachable states, yielding greater reductions than those possible using combinational analysis alone.
A sequential redundancy identification flow has been proposed that works in an assume-then-prove paradigm. Such a flow first guesses the redundancy candidates, next selects a representative register/gate, and then creates a speculatively-reduced netlist by merging fan-out references of every redundancy candidate with a reference to its representative candidate. To ensure the validity of the guessed redundancy, a miter (i.e., a proof obligation, in the form of an exclusive-OR gate) is added between each candidate and its representative. Finally, each miter is either proven valid, or disproven as invalid, in which case the speculatively-reduced netlist must be refined due to an error in the guessed redundancy. Combinational satisfiability (SAT) sweeping detects and merges nodes that are equivalent in a combinational network; this work offers a subset of the reductions possible with the sequential redundancy identification technique. Observability Don't Care (ODC) based SAT sweeping is a technique in which two gates that are not strictly functionally equivalent may be merged, provided that whenever those two gates are inequivalent, their inequivalence can be demonstrated to be unobservable at some fan-out boundary. Such reductions are a superset of the so-called cone-of-influence reduction, where logic falls outright out of the structural fan-in of any design outputs. Another method using sequential optimization based on eliminating inductively unobservable state variables has also been proposed.
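The speculative-reduction step of the assume-then-prove flow described above can be sketched as follows. This is a hedged illustration under assumed data structures (a gate-to-fan-in mapping and equivalence classes of guessed-redundant gates); the representative-selection policy (lexicographic) and the miter encoding as a tuple are arbitrary choices made for the sketch, not part of the proposed flow.

```python
def speculatively_reduce(netlist, equiv_classes):
    """For each guessed equivalence class: pick a representative,
    redirect fan-out references of the other members to it, and record
    an XOR miter (proof obligation) between each member and the
    representative."""
    miters = []
    for eq_class in equiv_classes:
        rep, *others = sorted(eq_class)  # arbitrary representative choice
        for gate in others:
            # Merge: every fan-out reference to `gate` now points at `rep`.
            for g, fi in netlist.items():
                netlist[g] = [rep if f == gate else f for f in fi]
            # Proving this miter unsatisfiable validates the guess;
            # a counterexample forces refinement of the reduced netlist.
            miters.append(("XOR", rep, gate))
    return netlist, miters

# Toy netlist where g1 and g2 are guessed functionally equivalent.
netlist = {"out": ["g1", "g2"], "g1": ["a"], "g2": ["a"]}
netlist, miters = speculatively_reduce(netlist, [{"g1", "g2"}])
# Fan-out of g2 is redirected to g1; one miter (g1 XOR g2) remains to prove.
```

In a real flow each miter would be discharged by a SAT or model-checking engine; any disproven miter triggers refinement of the guessed equivalence classes, exactly as described above.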
While all design optimization techniques have the aforementioned benefits in reducing design area, power consumption, etc., the use of sequential reductions for design optimization in an automated flow is challenging in practice. For example, in certain designs redundancy is deliberate and desired, e.g., to facilitate late design changes (Engineering Change Orders), to provide runtime-configurable design resilience in case of silicon failures, or to reduce circuit propagation delays. Additionally, most design methodologies require the use of combinational equivalence checking on the pre- vs. post-synthesis models, which may be violated if sequential optimizations are performed. Furthermore, design latches are often relevant to initialization logic, post-silicon analysis logic, Design for Test functionality, etc. Hence, most design methodologies limit automated optimization techniques to combinational ones. Nonetheless, sequential optimizations are more powerful than combinational ones. In the design optimization realm, such sequential transforms may often be performed manually, ideally using automatically generated optimization reports to guide the designer in their Hardware Description Language (HDL) editing. In such flows, it is desirable to apply these algorithms as early as possible in the design cycle to provide feedback to the designers for manual HDL optimization. Additionally, such transformations may be used to reduce verification runtime, since verification complexity is generally exponential in design size.
While numerous transformations are available for logic optimization, what is lacking is a solution that can identify the cause of why a particular optimization was possible. Most notably, it is generally the case that performing one optimization enables another that may not have been possible without the former. For example, merging a pair of functionally equivalent gates may cause other gates to become unobservable. It is also noteworthy that, for efficiency, a large number of optimization steps are often performed concurrently within a logic optimization flow. Performing one optimization at a time would be prohibitive from a scalability perspective, both because it entails many logic optimization runs instead of one, and because limiting each run to a single optimization step would likely make that run itself prohibitively expensive: e.g., in a speculative-reduction-based sequential redundancy removal flow, the scalability benefit enabled by speculative reduction of a single redundancy is significantly less than that enabled by speculative reduction of many redundancies.
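The enabling effect described above, where merging equivalent gates causes other gates to become unobservable, can be illustrated concretely. The sketch below is an assumption-laden toy example (gate names and the fan-in mapping are invented for illustration): a cone-of-influence check before and after a merge exposes the gates whose unobservability was *caused* by that merge.

```python
from collections import deque

def coi(fanins, outputs):
    """Structural cone-of-influence via backward traversal."""
    seen, queue = set(), deque(outputs)
    while queue:
        g = queue.popleft()
        if g not in seen:
            seen.add(g)
            queue.extend(fanins.get(g, []))
    return seen

# Toy netlist: g1 and g2 are guessed functionally equivalent, and g3 is
# observable only through g2.
fanins = {"out": ["g2"], "g2": ["g3"], "g1": ["a"], "g3": ["b"]}
before = coi(fanins, ["out"])       # g3 still observable through g2
fanins["out"] = ["g1"]              # merge: replace the reference to g2 by g1
after = coi(fanins, ["out"])
newly_unobservable = before - after
# g2, g3 (and input b) fall out of the COI only because of the merge,
# i.e., the merge is the cause of their unobservability.
```

Tracking such before/after differences per optimization step is exactly the kind of causality information the flow discussed here seeks to surface, rather than merely reporting the final reduced netlist.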