The cost of correcting a design error in an electronic system rises steeply at each successive step of the design process, as does the likelihood that the error escapes discovery. Thus, an error found during the design of a logic block costs far less to correct than the same error found during final verification before tape-out. An error found at final verification, in turn, costs less to correct than one found during physical validation, which itself costs less than an error discovered in a customer-ready application.
A technology business's success depends on a design process that yields functionally correct hardware efficiently and at the lowest possible cost. Because design verification must determine whether a proposed circuit design functions according to its specification, extensive analysis is required: the circuit must be verified with simulation tools to ensure that both the circuit itself and the system as a whole operate correctly. Because current design-verification tools cannot evaluate entire circuits in detail (particularly large circuits) quickly or efficiently enough to be practical, circuit designers have adopted verification approaches that maximize coverage. Traditional verification methods, for example, are based on test benches. A test bench applies sequences of test vectors to a circuit's inputs and monitors the results at its outputs; if the outputs match the expected values, the circuit is presumed to function properly. However, the long test durations and difficulty inherent in verifying complex modern circuit designs make the test-bench approach less effective at finding design errors, and current methods do not scale to large electronic systems comprising perhaps hundreds of millions of devices.
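The classic test-bench flow described above can be sketched in a few lines of Python. Here the device under test is modeled as a simple function (a hypothetical 2-bit adder chosen purely for illustration); a real test bench would instead drive an RTL simulation through a simulator interface, but the apply-and-compare loop is the same.

```python
# Minimal test-bench sketch. The "DUT" (device under test) is a
# hypothetical 2-bit adder modeled as a Python function; a real DUT
# would be simulated hardware driven through a simulator API.
def dut_adder(a, b):
    """Model of a 2-bit adder producing a 3-bit result."""
    return (a + b) & 0b111

# Fixed sequence of test vectors, each paired with its expected output.
TEST_VECTORS = [
    (0, 0, 0),
    (1, 2, 3),
    (3, 3, 6),
    (2, 3, 5),
]

def run_testbench():
    """Apply each vector to the DUT inputs and compare the outputs.

    Returns the list of mismatches; an empty list means the circuit
    produced the expected value for every vector applied.
    """
    failures = []
    for a, b, expected in TEST_VECTORS:
        result = dut_adder(a, b)
        if result != expected:
            failures.append((a, b, expected, result))
    return failures
```

The weakness the passage identifies is visible even in this sketch: the quality of the verification is bounded by the fixed vector list, which for a large design can never enumerate more than a tiny fraction of the input space.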
Modern circuit designers have therefore turned to constrained random verification to gauge the functionality of complex circuit arrangements. The test bench generates a random (or pseudo-random) set of test vectors, which can represent a much wider range of values than the fixed sequences previously employed. However, arbitrary random values may not properly stimulate a design in situ, so the random values must be constrained to reflect the context of the system or subsystem under test. Constraints may be hard or soft: a hard constraint must always be met, while a soft constraint should be met if possible. Because modern electronic systems are so complex, constraints may be contradictory or otherwise incompatible, whether through mistakes by the individual or individuals defining them or through difficult-to-understand interactions among the constraints.
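The hard/soft distinction can be illustrated with a small Python sketch. The constraints below are hypothetical examples, not taken from any particular design: the hard constraint requires that the two operands fit in a 3-bit sum, and the soft constraint prefers vectors with distinct operands. A simple rejection-sampling loop enforces the hard constraint unconditionally and satisfies the soft constraint when it can.

```python
import random

# Hypothetical constraints for illustration only.
def hard_ok(a, b):
    return a + b <= 7   # hard: the sum must fit in 3 bits, always

def soft_ok(a, b):
    return a != b       # soft: prefer distinct operands, if possible

def random_vector(rng, max_tries=1000):
    """Draw one constrained random vector by rejection sampling.

    Candidates violating the hard constraint are discarded outright.
    A candidate that is legal but misses the soft constraint is kept
    as a fallback, returned only if no ideal candidate appears.
    """
    fallback = None
    for _ in range(max_tries):
        a, b = rng.randrange(8), rng.randrange(8)
        if not hard_ok(a, b):
            continue                # hard constraint is never relaxed
        if soft_ok(a, b):
            return a, b             # both constraints satisfied
        fallback = (a, b)           # legal, soft constraint unmet
    if fallback is None:
        # No legal vector found: the hard constraints may be
        # contradictory, the failure mode the text warns about.
        raise RuntimeError("hard constraints appear unsatisfiable")
    return fallback

rng = random.Random(0)              # seeded for reproducible stimulus
vectors = [random_vector(rng) for _ in range(20)]
```

Production flows use dedicated constraint solvers rather than rejection sampling (which degrades badly as constraints tighten), but the sketch shows why contradictory hard constraints are a distinct failure mode: the generator can produce no stimulus at all rather than merely poor stimulus.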