1. Field of the Invention
This invention relates to a method of designing and testing electronic assemblies and, more specifically, to a method of designing electronic components for use in electronic assemblies and a method of testing such components using a component-by-component testing technique.
2. Description of the Prior Art
Current methods for testing electronic equipment include various techniques that individually test each printed circuit board used in the assembled equipment.
Board level automatic test equipment (ATE) intended for general purpose application utilizes either of (or a combination of) two approaches: in-circuit test (ICT) or functional board test (FBT). Both techniques have deeply rooted problems which prevent their conceptual ideals from being fulfilled. In what might be considered tacit agreement with this statement, a serial shift path is included in some designs to reduce the board level test problem to one of more reasonable proportions. However, this technique fails to address certain fault categories and introduces new problems which have yet to be solved.
ICT is an attempt to test individual components of an assembly one-by-one, by providing stimulus directly to the device singled out for test. Instead of using a card-edge connector, an in-circuit test is usually administered by mounting the printed circuit board in a multiple-pin (bed-of-nails) fixture. The fixture pins, which are usually brought into contact with test points (nodes) on the board by vacuum actuation, are configured so as to contact every node on the circuit board. A different test fixture is fabricated for each circuit board type being tested so that the pins line up with the nodes. Test equipment limitations usually dictate reliance upon etch of the assembly being tested to complete the connection on all but the smaller assemblies. While means exist to verify both contact between the board being tested and the individual pins (probes) of the bed-of-nails, and the integrity of board etch, these problems result in decreased throughput and less accuracy of diagnosis.
Providing test stimulus for digital devices requires overdriving the outputs of devices of the assembly that control the target device (i.e., the component to be tested) during functional operation of the unit. While the possibility of damaging these other devices, by forcing them to an opposite state, has been empirically shown to be of little current practical significance, this problem will continue to exist, and may even become insurmountable at some point in the evolution of integrated circuits. In many cases, the overdrive capability of the tester is inadequate to deal with particular devices, requiring that the forcing be accomplished at a previous level of logic (i.e., earlier in the circuit paths). Such fixes interfere with diagnostic accuracy, typically being beyond the scope of the tester software (i.e., the program that controls the execution of the ATE test sequence) to fully, or even largely, integrate. The advent of Advanced Schottky devices, such as the Texas Instruments Incorporated "AS Series", places an even greater demand on tester hardware and software.
Driver current cannot be increased at the expense of slew rate (i.e., rate of change of voltage), however, since device operation is often dependent on some minimum risetime. More current switching in a shorter time produces increased noise, further complicating tester design goals. The inability to prevent spikes when overdriven circuits attempt to change states, as an indirect result of stimulus to the target device, often requires that other devices be preconditioned to prevent such feedback. Since the algorithms to accomplish this guarding (i.e., preconditioning to prevent feedback) must deal with device functionality, the tester software must increase in capability at a rate coupled with the change of device complexity. As fewer small-scale integration (SSI) and medium-scale integration (MSI) devices are used, not only will tester software have to be exceedingly complex to identify these feedback loops, but it will often be unable to find a point at which to inject the guarding stimulus.
The drivers to provide the needed stimulus over a variety of integrated circuit logic families are necessarily expensive. Individual driver cost is a major issue where the need for more than a thousand drivers per tester is not uncommon.
ICT stimulus problems notwithstanding, there is no guarantee that the inability of the target device to produce a correct level is caused by an internal fault. Wired-or's, marginal shorts, or loading by other devices are possibilities which require further analysis merely to be discounted. While the problems of developing techniques to deal with these situations do not seem beyond solution, the cure is already far behind the need. Furthermore, the use of devices having connections accessible only on the side of the printed circuit board contacting the bed-of-nails, will likely tax a solution applicable to devices packaged in dual-in-line-packages (DIP's).
In-circuit testing, then, must deal with a variety of problems not fully appreciable when the possible ability to test a single device at a time seems the central issue. The ICT problems may be summarized as follows:
(1) Overdriving requirements.
(2) Possible device damage.
(3) Necessity to guard.
(4) Bed-of-nails contact.
(5) Reliance on etch.
(6) Intra-node diagnosis.
(7) Driver cost.
The functional board test approach is an attempt to provide stimulus and check responses at the external connections of an assembly, usually at the board's edge connections, in much the same fashion as the unit would function in a system environment. Predicting the state of external connections (for error detection) and of internal points (for fault diagnosis) requires extensive tester software. While the alternative of eliminating this software and learning the responses has been used in some FBT efforts, the disadvantages of doing so outweigh the cost advantage immediately gained in most cases.
If it were true that an assembly, correctly designed from a utilization standpoint, would always respond in the same manner to given stimulus, the only problems to be reckoned with using this approach would involve timing repeatability from one test to another or from one tester to another. However, the hardware designer is generally obliged only to ensure that all such assemblies respond to user stimulus in the same user-visible manner. Consequently, a complex board to be tested with an FBT tester must be designed for repeatability rather than merely for functionality.
The degree of repeatability necessary depends upon the resolution of the tester. Currently, tester vendors tout nanosecond capabilities, but these figures apply only to hardware control which is not fully integrated into the tester software. This degree of precision, however, would have to be supported by something even more complex than the present stored-pattern concept. Even without such resolution, differences found between a sample board and simulator-generated patterns may require manual masking of the response to be checked for at a particular point. Such masking obviously degrades the diagnostic process, adding to the number of cases where a problem may be detected but escapes diagnosis, while often involving repeated lengthy attempts at isolation.
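The masking process just described can be illustrated with a brief sketch. The following Python fragment is illustrative only; the pin count, vectors, and helper name are hypothetical, and an actual FBT system performs this comparison in tester hardware and software.

```python
# Hypothetical sketch of stored-pattern response checking with manual
# masking on an 8-pin edge connector. A mask bit of 0 excludes that pin
# from comparison -- which is exactly how a masked pin lets a real
# fault escape diagnosis at that vector.

def check_response(observed, expected, mask):
    """Compare an observed output vector against the simulator-generated
    expected vector, ignoring any pin whose mask bit is 0.
    Returns the indices of mismatching unmasked pins."""
    return [i for i, (o, e, m) in enumerate(zip(observed, expected, mask))
            if m and o != e]

expected = [1, 0, 1, 1, 0, 0, 1, 0]
observed = [1, 0, 0, 1, 0, 0, 1, 0]   # pin 2 differs from the simulation
mask     = [1, 1, 0, 1, 1, 1, 1, 1]   # pin 2 masked (not repeatable)
print(check_response(observed, expected, mask))  # [] -- fault escapes
```

With the mask bit restored, the same comparison would report pin 2 as a mismatch; the sketch merely makes concrete how masking trades false alarms for undetected differences.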
The ability of an FBT program to resolve faults correctly and efficiently--as opposed to getting lost or requiring scores of probes on even a small board--is difficult to determine. While it would seem likely that the probing algorithm could be applied as an option in fault simulation, such a feature has not been noted in primary FBT vendor literature, if indeed it exists at all. However, considering that it may take several months to generate FBT patterns with sufficient comprehensiveness of detection, and that solving the diagnostic problem could greatly extend that time, it is not necessarily in the best interest of the tester vendor to provide even more hurdles for the tester programmer. Meanwhile, higher levels of integration make mass part changes less acceptable when the test system fails.
Long tester program development times cannot be said to be reduced by automatic test vector generators, as these are characteristically ineffective on complex boards. A simple logic change may produce nearly catastrophic results on a test program even during this long manual development stage. The reliance upon product stability means that FBT cannot be depended upon as a predictable fault elimination mechanism throughout a typical product life cycle.
Currently, users are satisfied with comprehensiveness figures measured in terms of "stuck-at" faults (i.e., a fault that causes a point to remain at logic 0 or 1 throughout the test sequence). Exact definitions vary from vendor to vendor. Dynamic fault simulation is desirable, of course, but the tester software problems are probably insurmountable. As it is, one major vendor estimated the time for fault simulation of a 7000-gate-equivalent device exercised by 4000 vectors to consume sixteen hours of CPU time. While those involved with memory testing stress pattern sensitivity checks, and while logic becomes more and more dense, stuck-at evaluations become less and less meaningful.
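The stuck-at comprehensiveness measure can be made concrete with a small sketch. The two-gate circuit, net names, and fault-injection helper below are hypothetical simplifications introduced here for illustration; production fault simulators operate on full netlists and are vastly more elaborate.

```python
# Stuck-at fault coverage for a tiny illustrative circuit:
#   out = (a AND b) OR c, with internal net n1 = a AND b.
# Each net may be stuck at 0 or 1; a vector "detects" a fault when the
# faulty circuit's output differs from the good circuit's output.
from itertools import product

NETS = ["a", "b", "c", "n1", "out"]

def simulate(a, b, c, fault=None):
    """Evaluate the circuit; `fault` is (net, stuck_value) or None."""
    def force(net, val):
        return fault[1] if fault and fault[0] == net else val
    a, b, c = force("a", a), force("b", b), force("c", c)
    n1 = force("n1", a & b)
    return force("out", n1 | c)

def coverage(vectors):
    """Fraction of all single stuck-at faults detected by the vectors."""
    faults = [(net, v) for net in NETS for v in (0, 1)]
    detected = {f for f in faults
                for vec in vectors
                if simulate(*vec, fault=f) != simulate(*vec)}
    return len(detected) / len(faults)

# Exhaustive stimulus detects every stuck-at fault in this small circuit.
print(coverage(list(product((0, 1), repeat=3))))  # 1.0
```

A single vector such as (0, 0, 0) yields far lower coverage, which is the sense in which "stuck-at comprehensiveness" grades a pattern set; note that, as the text observes, a 100% stuck-at figure still says nothing about dynamic faults.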
While a number of hardware additions have been made to offset tester software inadequacies, especially in dealing with analog circuits, it is often found that features cannot be used together. For example, fault diagnosis involving current tracing to determine whether the error is attributable to a defect in the source driver or one of its loads may not be available for use when the tester is applying patterns at fast rates.
Major unresolved problem areas in the FBT approach are:
(1) Repeatability not easily attainable.
(2) Long development time.
(3) Over-reliance on design for testability.
(4) Diagnostic quality indeterminate.
(5) Sensitivity to design changes.
(6) Inability to deal with analog circuitry.
(7) Mutually exclusive features.
An article entitled, "In-Circuit Testing Comes of Age" by Douglas W. Raymond, which compares in-circuit testing (ICT) with functional board testing (FBT) can be found in the August 1981 issue of Computer Design on pages 117-124, and is incorporated herein by reference.
As an alternative to the above mentioned ICT and FBT approaches, which are generally applicable to both digital and analog electronic circuits, there exists the notable technique of connecting storage elements of a unit in such a manner as to provide a means of determining the state of each element using a simple algorithm. Using this method, which is applicable only to digital circuits, a test system may then be considered to have visibility to each element so connected, with the result of effectively reducing the test problem to one of having to deal only with non-sequential logic (in a system where visibility is provided to all storage elements). Perhaps the most significant implementations of this approach are Non-Functional Test (NFT) and Level Sensitive Scan Design (LSSD), in which storage elements (e.g., flip-flops) are generally connected in a serial shift path in addition to the combinational connections which determine the unit's functionality. This serial shift path is provided for testing purposes by enabling an alternate path in the test mode, with the value in one storage element being clocked to the next storage element under the control of a test clocking signal.
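The serial shift mechanism just described can be modeled behaviorally in a few lines. This is a sketch only; the class name, bit ordering, and two-mode clocking interface are illustrative assumptions, not part of any NFT or LSSD specification.

```python
# Behavioral model of a serial shift path through D flip-flops.
# In test mode each flip-flop takes its value from the previous element
# in the chain; in functional (normal) mode it takes its combinational
# input, capturing the circuit's response for later shift-out.

class ScanChain:
    def __init__(self, length):
        self.state = [0] * length

    def clock(self, test_mode, scan_in=0, functional_inputs=None):
        """One clock edge: shift serially in test mode, otherwise load
        the functional inputs. Returns the scan-out bit (the last
        element's value prior to the clock)."""
        scan_out = self.state[-1]
        if test_mode:
            self.state = [scan_in] + self.state[:-1]
        else:
            self.state = list(functional_inputs)
        return scan_out

chain = ScanChain(4)
# Shift a test pattern into the storage elements, one bit per test clock.
for bit in [1, 1, 0, 1]:
    chain.clock(test_mode=True, scan_in=bit)
print(chain.state)  # [1, 0, 1, 1]

# Capture the (hypothetical) combinational response with one functional
# clock, then shift it out for observation.
chain.clock(test_mode=False, functional_inputs=[0, 1, 1, 0])
print([chain.clock(test_mode=True) for _ in range(4)])  # [0, 1, 1, 0]
```

This is how the serial path gives the tester "visibility" to every storage element: any desired state can be shifted in, and any captured state shifted out, reducing the remaining test problem to the combinational logic between elements.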
While implementing the serial string may consume a good deal of the space which would otherwise be available for functional purposes (perhaps one-fifth of the logic), a compelling feature of this approach is that the hardware designer may proceed without having to consider miscellaneous testability issues. Another compelling feature of this approach is the ability to provide a means of system verification in a field setting with a very high degree of stuck-at comprehensiveness.
While being a major step forward in many respects, however, the auxiliary connection of storage elements falls short of being a long-term solution to the test problem. Regardless of the extent to which static problems may be detected, the need for dynamic verification and fault isolation programs must still be addressed. In fact, basic techniques for isolating stuck-at faults are still in the development stage although the design philosophy has been widely known for more than a decade. If both the necessity and feasibility of isolating dynamic faults to the component level by some other means are assumed, the usefulness of this approach in a board level test is greatly diminished. A similar argument would apply to isolating failing boards or subassemblies in the field.
Even statically, this method is cumbersome to implement on addressable memory elements, areas of asynchronous logic, and analog circuits. The latter deficiency obviously limits this test approach to but a segment of the electronics industry.
Perhaps one of the most significant long-term disadvantages to utilizing this approach would be the inability to prevent the duplication of a design that might otherwise be considered proprietary. That is to say, the logic within a custom IC could be deduced by means of the same serial shift path that reduces testability to a combinatorial problem.
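The design-security concern can be illustrated with a sketch: given full scan access, a hidden combinational next-state function can simply be tabulated by scanning in every possible state and capturing one functional clock. The `next_state` function below is a hypothetical stand-in for proprietary logic (here, a 3-bit counter), and the extraction loop is an illustrative simplification of the attack.

```python
# Illustration of logic extraction through a scan path: scan in each
# possible flip-flop state, apply one functional clock, scan out the
# captured next state, and thereby tabulate the combinational logic.
from itertools import product

def next_state(bits):
    """Hypothetical proprietary combinational logic: a 3-bit counter."""
    n = (int("".join(map(str, bits)), 2) + 1) % 8
    return [int(b) for b in format(n, "03b")]

def extract_truth_table(capture, width):
    """Recover the hidden logic by exercising the scan path exhaustively."""
    return {bits: tuple(capture(list(bits)))
            for bits in product((0, 1), repeat=width)}

table = extract_truth_table(next_state, 3)
print(table[(0, 1, 1)])  # (1, 0, 0) -- the successor of state 3
```

Since the state space of the combinational logic between scan elements is typically small, this exhaustive approach scales well enough to make the duplication risk real.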
Problems with utilizing the serial shift path approach are:
(1) Inapplicability to analog circuits.
(2) Problem with asynchronous elements.
(3) Problem in dealing with addressable memories.
(4) Isolation methods inadequate.
(5) Not applicable to dynamic testing.
(6) Inability to maintain security of design.
(7) Large real estate requirement.
In summary, reliance upon the principal test methods currently available to provide for the future needs of the electronics industry as a whole seems injudicious. Each method creates new problems while justifying its existence as a solution to problems encountered with other test methods. Rather than allowing concentration on the development of more refined implementations of a particular method, these approaches each demand significant ongoing effort merely to provide patches for their characteristic flaws.
Further, the development of any new testing technique that involves changes to components used in electronic assemblies should be done in a manner that permits the newly designed components to be substituted for existing components with minimum or no changes to existing printed circuit boards or electronic assemblies.