After fabrication, integrated circuits (ICs) are tested for proper functional operation using a set of test vectors that contains the test inputs and the expected test outputs. Test vectors generated by Automatic Test Pattern Generation (ATPG) software are applied to the chip or device under test (DUT) by Automatic Test Equipment (ATE). ICs that pass all the test vectors are qualified as good, whereas ICs that fail any test vector are qualified as bad or defective. For the defective ICs, the failure information is collected by the ATE and stored in a buffer for further processing by diagnostics tools.
Defects on a defective chip are located using the failure information obtained from the ATE, the test vectors applied to the chip, and the design information with which the chip was fabricated. Diagnostics tools identify failed patterns by fault selection and perform fault simulation on the failed patterns to identify the defect locations that best match the failures observed by the ATE. The matching is typically done using a scoring formula that takes the numbers of predictions, mis-predictions, and non-predictions into account. Diagnostics tools are generally useful for identifying yield limiters and possibly locating them on the defective IC, helping chip designers find weaknesses in their designs and improve them by fixing the cause of such failures.
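The scoring idea can be sketched as follows. This is a minimal illustration only: the weights and normalization below are assumptions, as actual diagnostics tools use tool-specific formulas.

```python
def diagnostic_score(predictions, mispredictions, nonpredictions,
                     w_mis=1.0, w_non=0.5):
    """Score how well a candidate fault explains the observed failures.

    predictions    -- failures the fault model correctly predicts
    mispredictions -- failures predicted but not observed on the ATE
    nonpredictions -- observed failures the fault model misses

    The weights and normalization are illustrative assumptions, not
    the formula of any particular diagnostics tool.
    """
    total = predictions + mispredictions + nonpredictions
    if total == 0:
        return 0.0
    # Reward correct predictions, penalize mis- and non-predictions,
    # and clamp the result to the range [0, 1].
    score = (predictions - w_mis * mispredictions
             - w_non * nonpredictions) / total
    return max(0.0, score)
```

A candidate fault that explains every observed failure with no extra predictions scores 1.0; one that explains nothing scores 0.0, and candidates in between are ranked accordingly.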
Identifying the location of defects using diagnostics tools becomes more challenging because of the ever-increasing complexity of chip design and manufacturing processes, as well as the shrinking size of the circuits formed on semiconductor chips. In particular, many yield problems associated with nanometer processes are caused by systematic design-process interactions. Recent developments in process technology and the complexity of semiconductor chip design have outpaced developments in physical analysis and verification tools, resulting in yield losses caused by systematic defects. To accelerate yield ramp, a methodology called volume yield diagnostics (VYD) is used in diagnostics tools to identify the most critical yield issues related to design processes.
Conventional VYD restricts the amount of time it takes to find the locations of systematic defects. A diagnostic method proposed by Hora et al. (“An Effective Diagnosis Method to Support Yield Improvement”, International Test Conference, pp. 260-269, 2002) produces a finite list of suspect locations to keep the analysis time low. Hora et al. teach a method, so-called Statistical Volume Diagnosis (SVD), incorporating several adjustments over conventional yield diagnostic methods. For example, only a subset of the failures observed by the tester is chosen for analysis. In general, SVD selects only the dominantly failed test vectors for each failed die and uses them for diagnostic simulation. This approach helps identify the so-called “hot spots” of the design while discarding statistically insignificant fault patterns from further analysis. Hot spots refer to areas on the failed die where one or more repetitive (or systematic) failures are observed.
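One plausible reading of the dominant-vector selection can be sketched as follows; the selection criterion (a simple cross-lot failure count with a top-k cutoff) is an illustrative assumption, since the actual SVD criterion is defined by the tool.

```python
from collections import Counter

def select_dominant_vectors(die_failures, top_k=2):
    """Pick the most frequently failing test vectors across a lot.

    die_failures maps a die id to the set of test-vector ids that
    failed on that die. Returns the top_k vectors ranked by how many
    dies they failed on. Illustrative only: this simple count-and-cut
    rule stands in for the tool-specific SVD selection.
    """
    counts = Counter()
    for vectors in die_failures.values():
        counts.update(vectors)
    # Keep only the dominantly failed vectors for diagnostic simulation.
    return [vector for vector, _ in counts.most_common(top_k)]
```

Vectors outside the top-k cutoff are discarded, which is exactly why a systematic defect whose failures are spread thinly across many vectors can be missed, as discussed below.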
SVD reduces analysis runtime but does not necessarily guarantee a high quality of results (QoR), because only dominantly failed test vectors are selected for diagnostic simulation. This approach rests on the generalization that systematic defects occur over a large number of samples and consequently affect a large number of test vectors. However, some systematic defects may not affect the majority of failed test vectors, or may affect different parts (or hot spots) of a design on different chips. As a result, different sets of test vectors may fail on different chips. If different sets of test vectors fail on different chips of the same device because of the same systematic defect, it is highly probable that the SVD scheme described above would not correctly diagnose that systematic defect.
SVD finds systematic defects on failed dies but has several issues. SVD diagnoses only the most statistically dominant hot spots based on diagnostic scores, which measure how well the behavior of the modeled defects matches the results measured on a specific failed die. The matching behavior is contained within the patterns that are selected. In some cases, SVD generates a low diagnostic score for a genuine hot spot. For instance, a systematic defect affecting 10 locations on a design may appear in different combinations on the IC. Only the patterns impacted by the dominant systematic defect are selected, since their diagnostic scores are high, while patterns not impacted by the dominant systematic defect are discarded due to their low scores. Locating all the hot spots in the failed dies therefore requires a CPU-intensive diagnostic simulation. Undesirably, failed dies with random defects may be included in the analysis along with those having systematic defects, causing unnecessary CPU usage. Failure patterns for random defects are also collected in the database and Pareto analysis is performed on them, further slowing down the analysis.
To resolve these issues with SVD, much research has been conducted by semiconductor and yield management companies. The research has focused on improving analysis speed by creating a database of failure modes and efficiently diagnosing failure patterns against the failure modes in the database to produce faster yet reliable results.
Several approaches have been taken to overcome the issues with SVD. Schuermyer et al. (“Identification of Systematic Yield Limiters in Complex ASICs through Volume Structural Fail Data Visualization and Analysis”, Proceedings of the IEEE International Test Conference, pp. 137-145, November 2005) proposed a solution for identifying yield weaknesses. According to Schuermyer et al., failed test data from the tester are transformed into fail signatures. For a large population of failures, the occurrences of each unique fail signature are counted. Based on key process parameters, the population of failed dies is split into two subgroups, and the relative counts of fail signatures for the two groups are compared.
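The signature-counting comparison described above can be sketched as follows; the record layout, the split-by-threshold rule, and the parameter name used in the example are illustrative assumptions, not details from Schuermyer et al.

```python
from collections import Counter

def compare_signature_counts(dies, split_param, threshold):
    """Split a population of failed dies into two subgroups by a key
    process parameter and count fail signatures in each subgroup.

    dies is a list of dicts, each with a 'signature' key and a numeric
    process-parameter key named by split_param. The field names and
    the simple threshold split are illustrative assumptions.
    """
    # Count each unique fail signature within the two subgroups.
    group_low = Counter(d["signature"] for d in dies
                        if d[split_param] <= threshold)
    group_high = Counter(d["signature"] for d in dies
                         if d[split_param] > threshold)
    return group_low, group_high
```

A signature that is far more frequent in one subgroup than the other points to an interaction between that process parameter and the corresponding yield weakness.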
Huisman et al. (“Data Mining Integrated Circuits with Fails Commonalities”, Proceedings of the International Test Conference, October 2004, pp. 661-668) described a way to use failed test data from many ICs to determine which ICs failed from similar causes, rather than determining the cause of each individual failed IC. They adopted a clustering technique to select failed ICs caused by similar defects. Bernstein (“Yield Enhancement from Bitmap Analysis”, KLA-Tencor company magazine, pp. 22-24, Autumn 1998) proposed an analysis method for understanding failures occurring on many different ICs.
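The clustering idea can be sketched with a deliberately simple stand-in: greedy single-pass grouping of ICs by the Jaccard similarity of their failing-test sets. The similarity measure, the threshold, and the greedy assignment are illustrative assumptions and are much simpler than the data-mining approach of Huisman et al.

```python
def jaccard(a, b):
    """Similarity of two failing-test sets (1.0 = identical)."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def cluster_failed_ics(fail_sets, min_sim=0.5):
    """Greedily group ICs whose failing tests overlap strongly.

    fail_sets maps an IC id to its set of failing tests. Each IC joins
    the first cluster whose representative set is similar enough,
    otherwise it starts a new cluster. Illustrative sketch only.
    """
    clusters = []  # list of (representative fail set, [ic ids])
    for ic, fails in fail_sets.items():
        for rep, members in clusters:
            if jaccard(rep, fails) >= min_sim:
                members.append(ic)
                break
        else:
            clusters.append((set(fails), [ic]))
    return [members for _, members in clusters]
```

ICs landing in the same cluster are candidates for having failed from a similar cause, so a single diagnosis can be attempted per cluster rather than per IC.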
The solutions proposed by the above references are pre-diagnostic techniques in which the grouping or clustering of ICs is done prior to the analysis, to identify systematic defects and to de-emphasize random defects. Grouping or clustering is done on the basis of similar failures with respect to the test vectors, scan flops, or primary outputs (POs) that failed on a particular die. Clustering approaches resolve some of the issues with SVD mentioned earlier: SVD by clustering runs only on failed dies with systematic defects, so failed dies with random defects are not diagnosed, and Pareto analysis on the collected failure patterns is performed only on the ‘intelligently’ selected failures, so the results are obtained relatively faster.
Despite some improvements over conventional SVD, issues remain unresolved by the clustering approaches. They fail to detect all the hot spots because only hot spots with high diagnostic scores are selected and analyzed; they diagnose only the most dominant hot spots, and diagnostic scores may be low even for genuine hot spots. A CPU-intensive diagnostic simulation is still required to locate all the hot spots, and database collection and Pareto analysis are performed on all the defects, including those with low scores, slowing down the analysis and sometimes producing misleading results.
In addition, there are architectural issues associated with the clustering approaches. Clustering the failing dies based on fail signatures is very inefficient because there may be multiple hot spots in the design, and failures can occur in different combinations; in such cases, no matching can be detected across the failing dies. The assumption that random defects occur independently on a small set of failing dies is also incorrect, because random defects can occur along with systematic defects. If random defects occur in combination with systematic defects, the matching algorithms proposed by the clustering approaches fail to correctly diagnose such defects. Diagnostic scores may also be poor when a failed die has multiple hot spots, because diagnostics tools tend to match all the identified failures against the single stuck-at fault being simulated. Such failures have to be individually diagnosed using more sophisticated algorithms to find the individual hot spots.
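The combinatorial weakness of signature matching can be made concrete with a small sketch. Assuming three hypothetical hot spots that fire in different pairwise combinations on different dies, exact-signature grouping finds no commonality at all, even though every failure traces back to the same three systematic causes.

```python
def exact_signature_groups(fail_sets):
    """Group dies whose fail signatures match exactly.

    fail_sets maps a die id to its set of failing tests. When multiple
    hot spots fire in different combinations, every die can carry a
    unique signature, so exact matching degenerates into singleton
    groups. Illustrative sketch of the architectural issue above.
    """
    groups = {}
    for die, fails in fail_sets.items():
        # frozenset makes the whole fail signature a hashable key.
        groups.setdefault(frozenset(fails), []).append(die)
    return list(groups.values())
```

With hypothetical hot spots A = {1, 2}, B = {3, 4}, and C = {5, 6}, three dies failing on A∪B, B∪C, and A∪C respectively produce three distinct signatures and therefore three singleton groups, which is why more sophisticated per-die analysis is needed.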