Several trends have adversely affected the task of testing populated printed circuit boards, making the diagnosis of failing components both difficult and costly.
First, the density of integrated circuit components and of electronic packages continues to increase. Designers at the chip level do not always concern themselves with designing a chip so it can be easily tested at either the chip level or in a populated card or board. As a consequence, chips, multi-chip modules, and cards oftentimes lack the necessary facilities that would enable a board to be tested in an economical and efficient manner.
Second, the increased proliferation of surface-mount packages with limited access to internal interconnections has forced the development of costly fixtures that accommodate probing of closely spaced input/output pins (I/O's). This often leads to the use of functional patterns as an alternative, which is not always thorough or desirable.
In accordance with one prior art approach, the testing of a board comprised of a plurality of electronic components of diverse nature and origin requires the testing of each component separately and then mounting each component on the board. The board is then tested as a single entity, applying to it a set of test vectors generated to detect a substantial percentage of possible failures within the board. This technique is referred to by those versed in this art as "Through-the-Pins-Testing". As its name implies, testing is achieved through the board I/O's, which provide the necessary means of communication with the outside world. As the name of this technique also implies, the board is viewed as a single entity to be tested in its entirety at one time, and not as the concatenation of a plurality of independent components.
"Through-the-Pins-Testing" has the disadvantage of requiring the handling of a very large logic model which is necessary to generate the aforementioned set of test vectors. A large logic model requires a large mainframe, and its processing during test generation and fault isolation consumes much CPU time. Furthermore, 100% test coverage cannot always be guaranteed even for a structured logic design. This method also requires an expensive computer-driven automatic tester with at least as many channels as board I/O's. The availability of such a tester is not always easily attainable, thus making this testing technique oftentimes impractical.
Schnurmann, in U.S. Pat. No. 4,348,759, entitled: "Automatic Testing of Complex Semiconductors on Testers with Insufficient Channels", proposes using multiplexors to group inputs of equal electrical characteristics and apply test patterns to the several groups of primary inputs through the multiplexors. Responses are observed at the output channels as is ordinarily done on any tester. This method is limited if the number of grouped inputs and individual outputs exceeds the number of available tester channels.
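The channel-count constraint described above can be sketched in a few lines. The following is an illustrative model only, not the patented method: it assumes each group of multiplexed inputs consumes one tester channel (multiplexor select lines are ignored for simplicity), while every individual output still requires its own channel. All function names here are hypothetical.

```python
def channels_required(input_groups, num_outputs):
    """One tester channel per group of multiplexed inputs,
    plus one channel per individual output (simplified model)."""
    return len(input_groups) + num_outputs

def is_testable(input_groups, num_outputs, tester_channels):
    """The method breaks down when grouped inputs plus outputs
    exceed the available tester channels."""
    return channels_required(input_groups, num_outputs) <= tester_channels

# Example: 40 primary inputs folded into 5 groups of 8, 20 outputs,
# tested on a 32-channel tester: 5 + 20 = 25 channels needed.
groups = [list(range(i, i + 8)) for i in range(0, 40, 8)]
print(is_testable(groups, 20, 32))  # True: 25 <= 32
print(is_testable(groups, 20, 24))  # False: 25 > 24
```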
Another testing technique that is widely used is known as "Chip-in-Place Test". This technique requires an array of precisely positioned exposed contact pads for each chip contained and interconnected in the high circuit density packaging structure. This array of contact pads, referred to as "EC pads" (for Engineering Change Pads), is utilized by a mechanical test probe head in the testing of each individual chip subsequent to interconnection of the chip in the high circuit density packaging structure, such as a multi-chip module, card, board, etc. This method has the disadvantage of requiring the alignment and subsequent stepping of the probe over the surface of the package, a time-consuming process. Moreover, since the probe head contacts one chip site at a time, the connections between the chips on the package are not tested.
Testing of complex semiconductor packages can also be achieved by another method called ECIPT, Electronic-Chip-in-Place-Testing, as described in U.S. Pat. Nos. 4,494,066 and 4,441,075. ECIPT provides for a design approach and testing method which allows for the testing of each individual chip of a plurality of interconnected chips through the module pins without physically disconnecting the chip under test. This methodology calls for loading into a set of master-slave latch pairs (L1/L2), attached to each I/O of every chip, appropriate binary values which will in turn control all the off-chip drivers of all other chips on the unit not under test, and in this manner allow for the chip under consideration to be tested as if it were alone on the module. Thus, by electrically isolating the chip under test from all others, it now becomes possible to apply to it all the test vectors originally generated for the chip under test at wafer testing time.
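The isolation idea behind ECIPT can be sketched as follows. This is a loose behavioral model under stated assumptions, not the patented circuitry: each I/O carries an L1/L2 latch pair whose slave stage is assumed to gate the chip's off-chip driver, and all class and function names are hypothetical.

```python
class ChipIO:
    """One I/O with its master-slave latch pair (L1/L2)."""
    def __init__(self):
        self.l1 = 0  # master latch: receives the scanned-in control bit
        self.l2 = 0  # slave latch: assumed to gate the off-chip driver

    def load(self, bit):
        self.l1 = bit       # scan the value into the master...
        self.l2 = self.l1   # ...then clock it into the slave

class Chip:
    def __init__(self, name, num_ios):
        self.name = name
        self.ios = [ChipIO() for _ in range(num_ios)]

    def drivers_enabled(self):
        return all(io.l2 == 1 for io in self.ios)

def isolate(chips, chip_under_test):
    """Disable the off-chip drivers of every chip except the one under
    test, so it can be exercised through the module pins as if alone."""
    for chip in chips:
        bit = 1 if chip is chip_under_test else 0
        for io in chip.ios:
            io.load(bit)

# Three chips on a module; isolate U1 for testing.
module = [Chip("U1", 4), Chip("U2", 4), Chip("U3", 4)]
isolate(module, module[0])
print([c.drivers_enabled() for c in module])  # [True, False, False]
```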
This method of testing has the disadvantage of requiring the handling of very large volumes of test data, which requires a sophisticated data management system and usually a large mainframe. The testing process is, in addition, expensive and time consuming. Moreover, it imposes on the designer certain constraints and limitations prompted by the requirement of a pair of master-slave latches attached to every I/O of every chip on the module.
Lately, work performed by JTAG (Joint Test Action Group) has led to the development of a technique called "Boundary-Scan" which is described in two papers entitled: "Boundary-Scan--A Framework for Structured Design-for-Test", by Maunder and Beenker, and "Testing a Board with Boundary Scan" by van de Lagemaat and Bleeker, both in the Proceedings of the 1987 International Test Conference, September 1987, pp. 714-723, and pp. 724-729, respectively.
The boundary-scan technique involves the inclusion of a shift-register latch (contained in a boundary-scan cell) adjacent to each functional component pin. This allows the signals at component boundaries to be controlled and observed using scan testing techniques.
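The control-and-observe mechanism described above can be illustrated with a minimal sketch. This mirrors the boundary-scan concept only loosely: the cell structure and signal names (TDI, TDO) are simplified assumptions for illustration, not the JTAG specification.

```python
class BoundaryScanCell:
    """Shift-register latch placed adjacent to one functional pin."""
    def __init__(self):
        self.shift_reg = 0  # scan stage, part of the serial chain
        self.update = 0     # value driven onto / captured from the pin

class BoundaryScanChain:
    def __init__(self, num_pins):
        self.cells = [BoundaryScanCell() for _ in range(num_pins)]

    def shift(self, tdi):
        """Shift one bit in at TDI; the last cell's bit emerges at TDO."""
        tdo = self.cells[-1].shift_reg
        for i in range(len(self.cells) - 1, 0, -1):
            self.cells[i].shift_reg = self.cells[i - 1].shift_reg
        self.cells[0].shift_reg = tdi
        return tdo

    def update_pins(self):
        """Drive the shifted-in values onto the pins (control)."""
        for cell in self.cells:
            cell.update = cell.shift_reg

    def capture_pins(self, pin_values):
        """Capture pin values into the chain for shifting out (observe)."""
        for cell, v in zip(self.cells, pin_values):
            cell.shift_reg = v

# Shift a 4-bit test pattern toward the pins, then apply it.
chain = BoundaryScanChain(4)
for bit in [1, 0, 1, 1]:
    chain.shift(bit)
chain.update_pins()
print([c.update for c in chain.cells])  # [1, 1, 0, 1]
```

Note that the first bit shifted in ends up at the far end of the chain, which is why the applied pattern appears reversed relative to the shift order.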
Proponents of this technique acknowledge that a majority of boards are not designed exclusively with in-house custom parts, and that semi-custom and merchant chip vendors fail to incorporate standard boundary-scan designs in their products. Consequently, this technique cannot be accepted as a universal solution to testing a fully populated board.
Testing of components prior to assembly on a board does not necessarily guarantee that the board will function properly in the system environment in which the board is designed to operate. Neither does it ensure that latent component failures will not eventually appear and disrupt the operation of the system. Clearly, at this point new techniques are required to: (1) isolate failures and (2) identify and replace the failing components. Reapplying the sequence of test vectors, whether deterministic or functional, that was previously used at testing time does not always guarantee success. Appropriate diagnostic techniques are required to automatically detect, isolate, and repair any failures that may occur prior to system delivery or in the field.