The very large scale integrated (VLSI) circuits fabricated today typically contain hundreds of thousands of circuit elements. Testing these complex circuits to isolate faulty circuits from fault-free circuits has become increasingly difficult because of the inaccessibility of internal circuit elements and the elements' interdependencies. Furthermore, as the number of possible test paths through a circuit rises as 2^n, where n is the number of circuit elements, efficient testing will continue to increase in difficulty as the density of these circuits continues to grow.
To test a circuit, a set of test vectors, or patterns, must be developed. The circuit is simulated without faults and the test patterns are applied to determine the circuit's expected response to the patterns at its primary outputs. The circuit is then simulated with faults and the test patterns again applied to determine if there is a change in the expected response to any pattern at the outputs. A fault that does not cause a change in the expected (fault free) response is an undetected fault. A test procedure desirably detects a high percentage of faults, and thus should have a minimum of undetected faults.
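The detection criterion above can be sketched in a few lines: a fault is detected by any pattern whose faulty response differs from the expected (fault-free) response at a primary output. The tiny AND-OR circuit and the stuck-at fault below are invented for illustration, not taken from the text.

```python
def good_circuit(a, b, c):
    """Fault-free model: a small AND-OR network, (a AND b) OR c."""
    return (a and b) or c

def faulty_circuit(a, b, c):
    """The same circuit with the AND-gate output stuck at 0."""
    return 0 or c  # the AND output never rises, so only c drives the output

# Exhaustive pattern set for the three primary inputs.
patterns = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

# A fault is detected by any pattern whose faulty response differs
# from the expected (fault-free) response at the primary output.
detecting = [p for p in patterns
             if good_circuit(*p) != faulty_circuit(*p)]
undetected = len(detecting) == 0
```

Here only the pattern (1, 1, 0) detects the fault; a fault for which `detecting` came back empty would be an undetected fault in the sense described above.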
One common method of developing tests employs external automated test equipment (ATE). In this method, an automatic test pattern generator (ATPG) is used which, given a circuit and fault model, generates a set of test patterns designed to detect close to 100% of the circuit's faults. These deterministic test patterns are then compressed and stored in a tester. During testing, they are decompressed and loaded into the primary inputs of the circuit under test (CUT). Faults are detected by comparing the response of the circuit to the expected response. Although deterministic ATPG can detect close to 100% of faults, it requires enormous resources to generate and store the test patterns required for complex VLSI circuits. Furthermore, interactions between the external tester and elements in the CUT create their own set of potential errors.
To counter these problems, built-in self-test (BIST) methods have been developed that move test pattern generation and output response analysis from an external source onto the chip itself. The core of BIST is that rather than applying a predetermined set of test patterns designed to detect a known set of faults, a pseudo-random pattern generator (PRPG), generally a linear feedback shift register (LFSR), on the chip itself generates pseudo-random patterns that are then used to detect faults. On-chip output response analysis is typically performed by a multiple-input signature register (MISR), a circuit that compacts the output response and generates a signature for comparison with the signature of a fault-free circuit.
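As a rough illustration of these two BIST components, the sketch below pairs a 4-bit Fibonacci LFSR (as the PRPG) with a 4-bit MISR. The register width, tap positions, and the stand-in for the circuit response are assumptions for the example, not taken from any particular design.

```python
def lfsr_step(state, taps=(3, 2)):
    """One shift of a 4-bit Fibonacci LFSR; `state` is a 4-bit int.
    Taps at bits 3 and 2 give a maximal-length (period-15) sequence."""
    fb = 0
    for t in taps:
        fb ^= (state >> t) & 1
    return ((state << 1) | fb) & 0xF

def misr_step(sig, response, taps=(3, 2)):
    """Shift the signature register and XOR in the circuit response,
    compacting the response stream into a single 4-bit signature."""
    fb = 0
    for t in taps:
        fb ^= (sig >> t) & 1
    return (((sig << 1) | fb) ^ response) & 0xF

state, sig = 0b1001, 0
for _ in range(8):
    state = lfsr_step(state)        # next pseudo-random pattern
    response = state ^ 0b0101       # stand-in for the CUT's output
    sig = misr_step(sig, response)  # compact into the signature
```

After the test session, `sig` would be compared against the signature obtained from a fault-free simulation; a mismatch indicates a detected fault.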
Although pseudo-random pattern generation is simple, this method rarely achieves the close-to-100% fault detection achieved by ATPG, as there are almost always faults that require very specific patterns to test; these patterns can take many cycles to be generated automatically, elevating the cost of test application and fault simulation beyond acceptable levels.
To tackle the problem of pseudo-random-pattern resistance, many techniques have been proposed, which, generally speaking, can be classified into two categories: changing the attributes of the pseudo-random patterns, or physically modifying the CUT.
The first category consists of techniques for modifying the pseudo-random patterns to provide better fault coverage. Some of these modification methods include reseeding, weighted random testing, and pattern mapping. In reseeding, deterministic test patterns are compressed and encoded as seeds for a PRPG. These seeds then generate test patterns known to find otherwise-undetectable faults. Weighted random testing biases the pseudo-random patterns toward detecting random-pattern-resistant faults by assigning weights to specific scan cells, skewing their values toward “1” or “0”. Pattern mapping takes the set of patterns generated by the PRPG and transforms them, using on-chip logic, into a new set of deterministic patterns that provides the desired fault coverage. However, these methods are significantly more complicated than simple random pattern generation and either require extra memory to store the seeds or weight sets, or require additional logic, all of which is expensive in terms of area overhead.
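Of the three techniques, weighted random testing is the simplest to sketch. Below, each scan cell draws its value from a biased rather than a fair coin; the weight profile, cell count, and target pattern are invented for illustration, whereas a real weight set would come from fault simulation.

```python
import random

def weighted_pattern(weights, rng):
    """One test pattern; weights[i] is the probability that cell i is 1."""
    return [1 if rng.random() < w else 0 for w in weights]

# Suppose fault simulation showed a random-pattern-resistant fault that
# needs cells 0-2 mostly at 1 and cell 3 mostly at 0 (an assumption).
weights = [0.9, 0.9, 0.9, 0.1]
rng = random.Random(42)  # fixed seed for reproducibility
patterns = [weighted_pattern(weights, rng) for _ in range(1000)]

# The biased stream hits the hard-to-reach pattern (1, 1, 1, 0) with
# probability 0.9**4 ~ 66%, versus 1/16 = 6.25% for unweighted patterns.
hits = sum(p == [1, 1, 1, 0] for p in patterns)
```

The trade-off noted above shows up directly: the weight set itself must be stored on chip (or the LFSR output must pass through weighting logic), which is the memory and area overhead the text describes.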
Another way of improving random pattern testability is through physical modification of the CUT. Test points, which include observation and control points, are inserted at selected nodes of the CUT. A control point—which forces a specific location in the circuit under test to a particular signal value—can test for known undetected faults at a node (e.g., a logic gate), and can also test for otherwise undetectable faults in the node's fanout cone. A control point is typically inserted by adding logic to the circuit. An observation point allows faults to be tested that do not propagate to a CUT output but can be observed at a specific location within the CUT logic. Observation points—typically inserted by adding an additional output lead from the node—improve the observability both of the output of an internal node and of nodes in its fanin cone.
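A minimal sketch of both kinds of test point, using an invented circuit: the control point is modeled as an extra OR gate that forces an internal node to 1 when a test-enable signal is asserted, and the observation point simply brings that node out as an additional output.

```python
def circuit(a, b, c, test_enable=0):
    and_out = a & b
    # Control point: an OR gate forces `and_out` to 1 in test mode,
    # exercising logic in its fanout cone that a & b rarely reaches.
    and_out = and_out | test_enable
    primary_out = and_out ^ c
    # Observation point: the internal node is also routed to an output,
    # so faults that never propagate to `primary_out` remain visible.
    observe = and_out
    return primary_out, observe

# Normal mode: the internal node follows a & b.
assert circuit(0, 1, 1) == (1, 0)
# Test mode: the control point drives the node to 1 regardless of a, b.
assert circuit(0, 1, 1, test_enable=1) == (0, 1)
```

The extra OR gate and output lead correspond to the added logic and area overhead described above; note that the OR gate sits directly on the functional path, which is the source of the timing concern raised for control points.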
One such method of selecting and inserting test points, multi-phase test point insertion (MTPI), partitions the testing into multiple phases, with each phase testing for a progressively reduced set of faults. In the first phase, observation points are selected to capture the detectable faults. In each subsequent phase, a set of control points is selected that, when added to the CUT logic, can find still-undetected faults. While observation points help improve fault coverage, control points can cause complicated changes in the circuit that may not always improve fault coverage and may also cause timing degradation due to the additional logic inserted into critical paths of the core logic of the CUT.
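The multi-phase flow can be sketched as a simple selection loop. The fault names, test point names, and per-phase maps below are invented placeholders; a real MTPI engine would derive them from fault simulation of the CUT rather than take them as inputs.

```python
def mtpi(faults, observable_by, control_phases):
    """observable_by maps a fault to the observation point that captures it;
    control_phases holds one fault -> control point map per phase."""
    remaining = set(faults)
    # First phase: observation points capture faults that are excited by
    # the pseudo-random patterns but never reach a primary output.
    obs = {observable_by[f] for f in remaining if f in observable_by}
    remaining -= set(observable_by) & remaining
    # Subsequent phases: control points target the progressively
    # reduced set of still-undetected faults.
    ctrl = set()
    for phase_map in control_phases:
        caught = remaining & set(phase_map)
        ctrl |= {phase_map[f] for f in caught}
        remaining -= caught
    return obs, ctrl, remaining

obs, ctrl, undet = mtpi(
    faults={"f1", "f2", "f3", "f4"},
    observable_by={"f1": "op_n7"},
    control_phases=[{"f2": "cp_n3"}, {"f3": "cp_n9"}],
)
# f1 is caught by an observation point, f2 and f3 by control points in
# two successive phases, and f4 remains undetected.
```

Each pass over `control_phases` works on a smaller `remaining` set, mirroring the "progressively reduced set of faults" described above.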