The very large scale integrated (VLSI) circuits fabricated today contain hundreds of thousands of circuit elements. Testing these complex circuits to distinguish faulty circuits from fault-free circuits has become increasingly difficult because of the inaccessibility of internal circuit elements and the elements' interdependencies. As the density of these circuits continues to grow, the difficulty of testing them for faults will grow even greater. Furthermore, as these integrated circuits are assembled into printed circuit boards (PCBs) and systems containing several PCBs, it is imperative to reuse the circuit-level testing effort in order to contain the very high costs involved in ensuring the quality of PCBs and systems.
To test a circuit, a set of test vectors, or patterns, must be developed. This is typically done by employing an automatic test pattern generator (ATPG) which, given a circuit and a fault model, generates test vectors to detect the faults. This process is accelerated by employing a fault simulator. In this fault simulation technique, the circuit is first simulated without any faults, and the test patterns are applied to determine the circuit's expected response to the patterns at its primary outputs. The circuit is then simulated with faults, and the test patterns are applied again to determine if there is a change in the expected response to any pattern. A fault that does not cause a change in the expected (fault-free) response is an undetected fault. A test procedure must provide a desired fault coverage, which means that it must detect a certain percentage of faults; undetected faults must be few.
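The fault simulation loop described above can be illustrated on a toy two-gate circuit. The following Python sketch is purely illustrative and not from the patent: the circuit, net names, and single stuck-at fault model are assumptions chosen to show how comparing faulty and fault-free responses yields a fault coverage figure.

```python
from itertools import product

# Toy combinational circuit: c = a AND b; out = c OR d.
NETS = ["a", "b", "d", "c", "out"]

def simulate(pattern, fault=None):
    """Evaluate one input pattern; `fault` = (net, value) models a
    single stuck-at fault by forcing that net to a constant."""
    v = dict(zip("abd", pattern))
    def force(net):
        if fault is not None and fault[0] == net:
            v[net] = fault[1]
    for net in "abd":
        force(net)
    v["c"] = v["a"] & v["b"]; force("c")
    v["out"] = v["c"] | v["d"]; force("out")
    return v["out"]

def detected(fault, patterns):
    """Detected iff some pattern makes the faulty output differ
    from the fault-free (expected) output."""
    return any(simulate(p, fault) != simulate(p) for p in patterns)

patterns = list(product((0, 1), repeat=3))                 # exhaustive here
faults = [(net, val) for net in NETS for val in (0, 1)]    # stuck-at-0/1
undetected = [f for f in faults if not detected(f, patterns)]
coverage = 100.0 * (len(faults) - len(undetected)) / len(faults)
```

For this small, irredundant circuit an exhaustive pattern set detects all ten stuck-at faults, so `coverage` is 100%; in practical VLSI circuits the pattern set is far from exhaustive, which is why some faults remain undetected.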
Testing of integrated circuits has traditionally been performed by employing an external testing device that applies the test patterns generated by an ATPG to the primary inputs of the circuit under test. The response of the circuit at its primary outputs to the test vectors is then compared in some manner with the expected response to determine if there are faults within the circuit. This method, however, requires enormous resources to generate and store the test patterns required for complex VLSI circuits.
The built-in self-test (BIST) scheme offers an attractive alternative to conventional methods of testing. With BIST, the test pattern generation and output response analysis are performed by on-chip circuitry. Such a scheme reduces external tester costs because the test generation and response analysis functions migrate onto the chip itself. Moreover, the integration of the designed circuitry and testing circuitry on the chip allows the circuit to be tested at its normal operating speed, which also enables the detection of non-modeled faults. The BIST scheme also provides a hierarchical solution that can easily be utilized at circuit board and system levels.
Different BIST schemes vary in the techniques used for on-chip test pattern generation and output response analysis. Random pattern generation, wherein the individual circuit inputs are assigned a 0 or a 1 with equal probability, is a preferred method because of its simplicity. The test patterns may be generated with a linear feedback shift register (LFSR). On-chip output response analysis is typically performed by a multiple-input shift register (MISR), a circuit that compacts the output response and generates a signature for comparison with the signature of a fault-free circuit.
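The LFSR/MISR arrangement can be sketched in a few lines of Python. This is an illustrative model, not the patent's circuitry: the 4-bit width, feedback taps, and stand-in circuit under test are all assumptions, chosen only to show pseudo-random pattern generation and signature compaction.

```python
def lfsr_patterns(seed, n, taps=(3, 2)):
    """Fibonacci-style 4-bit LFSR; yields n pseudo-random patterns.
    Taps (3, 2) are an illustrative maximal-length choice."""
    state = seed
    for _ in range(n):
        yield state
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1          # XOR of tapped bits
        state = ((state << 1) | fb) & 0xF   # shift in feedback bit

def misr_signature(responses, taps=(3, 2)):
    """Compact a stream of 4-bit responses into one 4-bit signature
    by XOR-ing each response into a shifting feedback register."""
    sig = 0
    for r in responses:
        fb = 0
        for t in taps:
            fb ^= (sig >> t) & 1
        sig = (((sig << 1) | fb) ^ r) & 0xF
    return sig

def cut(pattern):
    # Stand-in for the combinational circuit under test (illustrative).
    return (pattern ^ (pattern >> 1)) & 0xF

patterns = list(lfsr_patterns(seed=0b1001, n=15))
golden = misr_signature(cut(p) for p in patterns)   # fault-free signature
```

A faulty circuit would, with high probability, produce a signature different from `golden`; comparing the two is the on-chip analogue of comparing full output responses off-chip.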
Although random pattern generation is simple, the presence of random-pattern-resistant faults in many practical circuits limits its success. Acceptable test quality for these circuits can only be achieved by applying an inordinate number of random patterns. This elevates the cost of test application and fault simulation beyond acceptable levels. A problem then arises as to how to achieve the test quality of an ATPG tool while limiting the test application and fault simulation efforts.
One way of improving random pattern testability is through modification of the circuit under test (CUT). Test points, which include observation and control points, are inserted at selected nodes of the CUT. A control point improves the controllability of a node (i.e., the ability to achieve a particular signal value such as 1 or 0) as well as nodes in its fanout cone; it is typically inserted by adding logic to the circuit. An observation point improves the observability of a node (i.e., the ability to observe the output of an internal node) as well as nodes in its fanin cone; it is typically inserted by adding an additional output lead from the node. While observation points always improve fault coverage, control points cause complicated changes in the circuit which may not always improve the fault coverage. Several test point insertion procedures have been proposed in the literature. The underlying philosophy of these procedures is to identify, using either exact fault simulation or approximate testability measures, locations in the CUT at which to introduce control and observation points. Typically, these procedures suggest that control points in the CUT be driven by independent, equiprobable signals. The test patterns are then applied to the CUT in a single session with all control and observation points enabled.
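The controllability benefit of a control point can be illustrated numerically. In the Python sketch below (an illustrative example, not the patent's procedure), a 4-input AND gate's output is 1 for only 1 of 16 equiprobable random patterns; an assumed OR-type control point, which forces the net to 1 when its test signal is asserted, raises that signal probability substantially.

```python
from itertools import product

def and4(a, b, c, d):
    # Hard-to-control net: output is 1 only when all inputs are 1.
    return a & b & c & d

# Signal probability under equiprobable random inputs: 1/16.
prob = sum(and4(*p) for p in product((0, 1), repeat=4)) / 16

def and4_with_cp(a, b, c, d, t):
    # OR-type control point: the test signal t can force the net to 1.
    # (An AND-type point, net AND NOT t, would force a 0 instead.)
    return (a & b & c & d) | t

# With t also driven by an equiprobable random signal: 17/32.
prob_cp = sum(and4_with_cp(*p) for p in product((0, 1), repeat=5)) / 32
```

Raising the probability of the rare value makes faults in the node's fanout cone far more likely to be excited by random patterns, which is why control points are placed on such hard-to-control nets.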
Most practical circuits require a large number of test points to meet the fault coverage requirement, which is specified as the detection of a percentage of possible faults in the circuit. Selection of this large number of test points using repetitive exact fault simulation is not feasible because of the excessive computation required. Approximate testability measures, although they reduce computation time, ignore correlations between nodes in the CUT. This makes such measures incapable of properly capturing the interaction between control points. Of the 2^K possible combinations of values at K equiprobable control points, many combinations may be detrimental because of conflicting values. Since no attempt is made in these measures to consider this destructive correlation during the selection of control points, it is not unusual to obtain reduced fault coverage as increasing numbers of control points are inserted. This results in a divergence of the solution and the selection of futile control points. Furthermore, limiting chip area overhead by sharing the logic driving these control points is not straightforward. Finally, power dissipation during testing tends to be higher than in normal circuit operation because of the presence of a large number of nodes with uniform signal probability.
An objective of the invention, therefore, is to provide a method of identifying test points that achieves higher fault coverage for testing of a circuit with application of a specified number of test patterns. Another objective of the invention is to provide a method and apparatus for the built-in self-testing of circuits that requires fewer test points to achieve the desired fault coverage.