A problem often encountered in testing and subsequent diagnosis of complex VLSI devices is the lack of an effective and precise diagnostic methodology to pinpoint the root cause of a broad range of modeled and unmodeled faults. The rapid integration growth of these VLSI devices, together with high circuit performance and extremely complex semiconductor processes, has intensified old defect types and introduced new ones. The diversity and subtlety of defects, accompanied by limited fault models, usually result in large yet insufficient pattern sets, inadequate diagnostic fail data collection, and ineffective diagnostic simulations, all of which lead to poor diagnostic accuracy. The resulting problems generate a growing number of “no root cause found” fails, which typically end up in semiconductor failure analysis laboratories.
Identifying faults and pinpointing the root cause of a problem in a large logic structure calls for high-resolution diagnostics to isolate the defects and successfully complete the Physical Failure Analysis (PFA) defect localization that ultimately leads to higher yields. The resolution of state-of-the-art logic diagnostic algorithms and techniques depends on the number of tests and the amount of passing and failing test result data available for each fault. Oftentimes, conventional methods of generating test patterns, collecting associated test results, and utilizing all failing test data in diagnostic simulations are insufficient to achieve the desired diagnostic resolution.
Referring to FIG. 1, there is shown a diagram illustrating a prior art system for a typical diagnostic process (4) using test patterns (530) applied to a DUT (540) and subsequently feeding the responses into a diagnostic simulation (3) with the resulting fault callout(s) (5) being used for PFA (6).
Test Pattern Generation
Test patterns (530) are needed in manufacturing test to detect defects. Tests can be generated using a variety of methods. A representative model of the defect, referred to as a fault model (510), is typically employed. Fault models are advantageously used to guide pattern generation and to measure the effectiveness of the final pattern set. The stuck-at fault model is the most commonly used, but other models have been successfully used in industry. For a stuck-at fault model, faults are assigned to the inputs and outputs of each primitive block for both stuck-at-0 (S-a-0) and stuck-at-1 (S-a-1) conditions. Examples of primitive blocks, i.e., the lowest logical level in any design, include AND, OR, NAND, NOR, INV gates, and the like. For each fault, a generator determines, based on some logic model (500), the conditions necessary to activate the fault in the logic and to propagate its effect to an observation point. Tests are generated for each fault in the total set of chip faults, and methods are then used to compress these patterns to maximize the number of faults tested per pattern.
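The stuck-at detection principle described above can be sketched in a few lines: a pattern detects a fault when the good-machine response and the faulty-machine response differ at an observation point. The tiny netlist, the gate naming, and the evaluation routine below are illustrative assumptions, not part of any specific ATPG tool.

```python
# Minimal sketch of stuck-at fault detection on a two-gate netlist.
# The netlist format and function names are hypothetical.

GATES = {  # net -> (gate type, input nets); primary inputs: "a", "b", "c"
    "n1": ("AND", ("a", "b")),
    "out": ("OR", ("n1", "c")),
}

def evaluate(inputs, fault=None):
    """Evaluate the netlist; `fault` forces one net to 0 or 1 (stuck-at)."""
    values = dict(inputs)
    def val(net):
        if fault and net == fault[0]:
            return fault[1]          # the stuck-at value overrides the logic
        if net not in values:
            gate, ins = GATES[net]
            bits = [val(n) for n in ins]
            values[net] = int(all(bits)) if gate == "AND" else int(any(bits))
        return values[net]
    return val("out")

def detects(pattern, fault):
    """A pattern detects a fault when good and faulty responses differ."""
    return evaluate(pattern) != evaluate(pattern, fault)

# a=1, b=1 activates n1 stuck-at-0; c=0 propagates it through the OR gate
print(detects({"a": 1, "b": 1, "c": 0}, ("n1", 0)))   # True
```

A generator searches for input assignments like this one for every fault in the chip's fault list; compression then merges patterns so each detects as many faults as possible.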
In a manufacturing environment, tester time and tester memory are of prime importance; therefore, steps are taken to ensure that the patterns are as efficient as possible by having the test system (1) test the maximum number of faults per pattern (although such compressed patterns are more difficult to diagnose).
At final test, patterns are applied to the device under test (hereinafter referred to as DUT) (540) and test result data is collected (2). Test result data typically contains both passing and failing patterns, the specific latches or pins (“observation points”) that failed, and how they failed. To determine which fault explains the fail, the fail data is loaded into a diagnostic simulator (4). Each fault is analyzed to determine whether it explains the fail or set of fails. Resulting from this simulation (3) is a call-out report that lists each suspect fault and a confidence level at which the fault can explain the fail. Callouts (5) can range from precise calls of 100% (i.e., an exact match) down to much less confident numbers. Physical failure analysis (PFA) (6) requires locating the failure at its precise physical location, and as such, a highly accurate callout is needed. Often, the resultant diagnostic callout does not give a sufficiently clear indication of the fault location, and may even provide a totally wrong callout. In situations where several faults are identified but none has a precise callout, a finer resolution is required; a focused set of patterns can then be created based on a subset of the faults called out during diagnostic simulation. In a typical fault simulation (3), the fault is marked as detected once this process has been completed.
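The ranking step above can be illustrated with a simple Jaccard-style score: each candidate fault's simulated failing observation points are compared against the fails measured on the tester. The fail data, the fault names, and the scoring formula are assumptions for illustration only; production diagnostic simulators use considerably more elaborate scoring.

```python
# Illustrative callout ranking: score each candidate fault by how well
# its simulated fails overlap the observed fails (all data hypothetical).

observed_fails = {"latch_12", "latch_47", "pin_3"}

simulated_fails = {            # fault -> observation points it would fail
    "net_a stuck-at-0": {"latch_12", "latch_47", "pin_3"},
    "net_b stuck-at-1": {"latch_12", "latch_47"},
    "net_c stuck-at-0": {"latch_12", "pin_9"},
}

def confidence(simulated, observed):
    """Fraction of fail data explained, penalizing mispredicted fails."""
    return len(simulated & observed) / len(simulated | observed)

# The call-out report: suspect faults sorted by confidence, best first.
callout = sorted(simulated_fails.items(),
                 key=lambda kv: confidence(kv[1], observed_fails),
                 reverse=True)

for fault, fails in callout:
    print(f"{fault}: {confidence(fails, observed_fails):.0%}")
```

Here "net_a stuck-at-0" scores 100% (an exact match), while the other candidates explain only part of the fail data, which is the situation that motivates generating a focused follow-up pattern set.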
Fault Models and Unmodeled Defects
Physical defects can manifest themselves in many ways and often do not match any fault model (510). By expanding the ways in which test patterns are applied, the likelihood of also detecting unmodeled faults increases. Conventional methods for generating test patterns, applying them, collecting associated test results, and exercising conventional diagnostic algorithms are insufficient to achieve the desired diagnostic accuracy.
Diagnostic Simulation
Referring now to FIG. 2, a flow chart is shown illustrating a conventional diagnostic methodology typically used in industry, applicable to final test of a VLSI die or multi-chip module, and which is used for determining the root cause of failure(s) and, ultimately, steps for fixing the problem causing the failure.
The chip or module to be tested is described in the form of logic model(s) (500) (see FIG. 1) describing the DUT (540). Such logic models (500) can take the form of a high-level representation of the logic, such as behaviorals, or, at the other end of the spectrum, a netlist comprising primitives (NOR, NAND, and the like) and their respective interconnects.
A set of test patterns, also known as test vectors, is generated using one of several ATPGs (Automatic Test Pattern Generators) (520) which, depending on the size and complexity of the logic, may include one or more deterministic pattern generators, weighted adaptive random pattern generators, pseudo-random pattern generators, and the like.
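Of the generator types listed above, the pseudo-random variety is classically built from a linear-feedback shift register (LFSR). The sketch below shows the principle; the 4-bit width, seed, and tap positions are illustrative choices, not taken from the text.

```python
# A minimal Fibonacci LFSR, the classic structure behind pseudo-random
# test pattern generation (width, seed, and taps are hypothetical).

def lfsr_patterns(seed=0b1001, taps=(3, 2), width=4, count=5):
    """Yield `count` pseudo-random patterns from a Fibonacci LFSR."""
    state = seed
    for _ in range(count):
        yield state
        # feedback bit = XOR of the tapped state bits
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)

for p in lfsr_patterns():
    print(f"{p:04b}")      # each state is one scan-in pattern
```

With taps at bits 3 and 2 (polynomial x⁴ + x³ + 1), the register cycles through a maximal-length sequence of 15 nonzero states; in hardware, the same structure drives scan chains at very low area cost.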
Still referring to FIG. 2, in block 1, it is determined at the completion of the test (i.e., after applying all the test patterns known a priori to detect the presence of any failures) whether the chip or module passes or fails the test. Assuming that the answer is ‘yes’, the DUT is scribed, diced, and mounted onto the next level of packaging. Alternatively, if the device under test fails during testing, the corresponding failing data is collected (block 2) and processed by a set of diagnostic simulation programs (block 3) designed to localize the failure. The diagnostic simulation is performed using the logic model of the DUT and the test patterns that were applied and that detected the fault in the DUT. The intent of the diagnostic tool is to determine the fault or set of faults which explain the fail data. The outcome of the diagnostic tool is referred to as a fault callout. Typically associated with a fault callout is a measure of how well each fault in the callout explains the occurrence of the physical failure (block 5); this performance measure provides a confidence level. The fault callout is then preferably input to a physical failure analysis (PFA) process (block 6), wherein the logic failures are correlated with actual physical failures. Locating the physical failures makes it possible to determine the root cause of the problem, allowing the engineer to take the necessary steps to fix it.
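The decision flow of FIG. 2 can be summarized as a short driver loop. Every name below (the tester and simulator interfaces, the callout data, the fault names) is a placeholder assumption standing in for the tools described in the text, not a real API.

```python
# Hedged sketch of the FIG. 2 flow: test -> collect fails -> diagnose
# -> callout -> PFA. All interfaces and data are hypothetical stubs.
from dataclasses import dataclass

@dataclass
class Callout:
    fault: str
    confidence: float   # how well this fault explains the fail data

def apply_patterns(dut, patterns):          # blocks 1-2: test, collect fails
    return dut.get("fails", [])             # empty list -> device passed

def diagnose(logic_model, patterns, fail_data):   # block 3: simulation
    return [Callout("net_x stuck-at-1", 1.0),
            Callout("net_y stuck-at-0", 0.4)]

def final_test_flow(dut, patterns, logic_model):
    fail_data = apply_patterns(dut, patterns)
    if not fail_data:
        return "pass"                                      # dice and package
    callouts = diagnose(logic_model, patterns, fail_data)  # block 3
    best = max(callouts, key=lambda c: c.confidence)       # block 5: callout
    return f"PFA on {best.fault}"                          # block 6

print(final_test_flow({"fails": ["latch_12"]}, [], {}))
```

The key structural point carried by the flow is that PFA (block 6) consumes only the highest-confidence callout, which is why a weak or wrong callout dooms the physical localization step.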
Therefore, there is a need in industry for a test methodology that provides extended diagnostic capabilities under diverse environmental test conditions, acquires specific device responses, and fully utilizes those responses in a new, enhanced diagnostics process that results in a more accurate diagnosis.