Optical metrology techniques generally referred to as scatterometry offer the potential to characterize a workpiece (i.e., a sample) during a manufacturing process. In practice, light is directed onto a periodic grating formed in a workpiece and spectra of reflected light are measured and analyzed to characterize the grating. Characterization parameters may include critical dimensions (CDs), sidewall angles (SWAs) and heights (HTs) of gratings, material dispersion parameters, layer thicknesses, angle of incidence of light directed onto the diffracting structure, calibration parameters of an optical measurement system, etc., which affect the polarization and intensity of the light reflected from or transmitted through a material.
Characterization of the grating may thereby characterize the workpiece as well as the manufacturing process employed in forming the grating and the workpiece. For example, the optical metrology system 100 depicted in FIG. 1 can be used to determine the profile of a grating 102 formed on a semiconductor wafer 104. The grating 102 can be formed in test areas on the wafer 104, such as adjacent to a device formed on the wafer 104. The optical metrology system 100 can include a photometric device with a source 106 and a detector 112. The grating 102 is illuminated by an incident beam 108 from the source 106. In the present exemplary embodiment, the incident beam 108 is directed onto the grating 102 at an angle of incidence θi with respect to the normal of the grating 102 and at an azimuth angle φ (e.g., the angle between the plane of incidence of the beam 108 and the direction of periodicity of the grating 102). A diffracted beam 110 leaves the grating 102 at an angle θd with respect to the normal and is received by the detector 112. The detector 112 converts the diffracted beam 110 into a measured metrology signal. To determine the profile of the grating 102, the optical metrology system 100 includes a processing module 114 configured to receive and analyze the measured metrology signal.
Analysis of measured spectra generally involves comparing the measured sample spectra to simulated spectra to deduce a scatterometry model's parameter values that best describe the measured sample. As used herein, “model” refers to a scatterometry model and “parameter” refers to a model parameter of the scatterometry model unless otherwise specified. A model can include a film model, optical CD model, composition model, overlay model, or any other optical model or combination thereof.
Existing methods of determining which parameters to include in or exclude from a model require a user (e.g., an engineer performing the regression analysis) to determine model fit metrics at one or more data points, subjectively analyze those metrics, and revise the model based on the user's subjective assessment. For example, the user would record the chi-square value at the one or more data points, make a subjective determination regarding whether chi-square was sufficiently low, and then revise the model if it was not. This process of discovery typically involves determining: (1) the materials in the structure for which the optical constants should be adjusted; (2) the materials in each layer whose optical constants are varying; (3) which dimensional parameters are changing across the wafer; and (4) the model parameters for which the metrology tool provides sufficient sensitivity and minimal parameter correlation, thereby justifying floating them in the model. The user could also review trends in model parameters across the wafer and compare them against expected within-wafer variations based on known process signatures from, for example, etch or deposition tools. Such subjective determinations are typically based on previous experience with other projects and models, and on qualitative assessments of whether the model fit to the data appears good enough. Thus, the existing process of optimizing the model cannot easily be automated, leading to greater cost, longer time, and inconsistencies in model generation and in evaluation of the diffracting structure.
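The fit metric inspected in the workflow above can be illustrated with a short sketch. The spectra, noise level, and function name below are hypothetical and serve only to show how a chi-square value is computed from measured and simulated spectra:

```python
# Sketch of the chi-square fit metric a user would inspect when subjectively
# judging model quality. All numeric values below are hypothetical.

def chi_square(measured, simulated, sigma):
    """Sum of squared residuals between spectra, normalized by noise sigma."""
    return sum((m - s) ** 2 / sigma ** 2 for m, s in zip(measured, simulated))

# Hypothetical measured and simulated reflectance values at one wafer site.
measured = [0.52, 0.48, 0.45, 0.50, 0.55]
simulated = [0.51, 0.49, 0.44, 0.52, 0.54]
sigma = 0.01  # assumed 1-sigma measurement noise

print(chi_square(measured, simulated, sigma))
```

For residuals comparable to the noise level, chi-square is on the order of the number of data points; whether a given value is "sufficiently low" is exactly the judgment the existing methods leave to the user.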
One technique of model optimization is disclosed by U.S. Pat. No. 8,090,558, “Method for optical parametric model optimization,” which is incorporated by reference for all purposes. U.S. Pat. No. 8,090,558 describes a method for determining which parameters are to be floated, set, or discarded from the model, which includes determining whether average chi-square and chi-square uniformity decrease or increase when a parameter is added to the model. While this method provides a more systematic approach and demonstrates more efficient model optimization than previous techniques, techniques such as this offer no guideline for how large an average chi-square improvement, or how low a chi-square uniformity value, is necessary to justify a model change. The decision of which parameters to include or remove, and when to terminate the optimization, is left to the subjective judgment of the data analyst. In addition to relying on subjective judgment, such existing techniques do not prevent noise parameters (i.e., insignificant parameters) from being added to the model.
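For illustration, the question of whether a chi-square improvement justifies floating an additional parameter can be quantified with the standard F statistic for nested least-squares fits. The sketch below is a textbook construction, not the specific procedure of U.S. Pat. No. 8,090,558, and all numeric values are hypothetical:

```python
# Generic F statistic for nested least-squares fits: tests whether floating
# one additional parameter reduces chi-square significantly. Illustrative
# sketch only; not the specific procedure of U.S. Pat. No. 8,090,558.

def f_statistic(chi2_reduced, chi2_full, n_points, p_full, delta_p=1):
    """F = [(chi2_reduced - chi2_full) / delta_p] / [chi2_full / (n - p_full)]."""
    dof_full = n_points - p_full
    return ((chi2_reduced - chi2_full) / delta_p) / (chi2_full / dof_full)

# Hypothetical fits: 250 spectral points; 5 floated parameters after adding one.
f = f_statistic(chi2_reduced=310.0, chi2_full=260.0, n_points=250, p_full=5)
print(round(f, 2))  # compared against an F(1, 245) critical value (about 3.9 at alpha = 0.05)
```

The statistic itself is objective; the difficulty described above is that choosing the significance level, and applying such a test repeatedly, remains subjective and unprotected against noise parameters.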
Some existing methods allow a user to test the addition of a single parameter to the model, but are inadequate for testing repeated model changes with the same set of data. This is the so-called “multiple comparisons problem.” Statistical significance tests, such as the F-test, were not designed for repeated model changes, and repeated use can result in false assessments of the significance level. Various modifications have been proposed for dealing with the multiple comparisons problem (e.g., the “Bonferroni correction” and “Family-Wise Error Rate” methods), but they are less than ideal. The Bonferroni correction is generally regarded as too conservative, and may reject some parameters that should be included in the model. Family-Wise Error Rate methods were developed for conducting only a few tests, not a large number of tests.
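The Bonferroni correction mentioned above simply divides the desired family-wise significance level by the number of tests performed. A minimal sketch, using hypothetical p-values:

```python
# Bonferroni correction: with m candidate parameters tested on the same data,
# each individual test's threshold is tightened from alpha to alpha / m. This
# controls the family-wise error rate but is often considered too conservative.

def bonferroni_threshold(alpha, m_tests):
    return alpha / m_tests

alpha = 0.05
p_values = [0.001, 0.012, 0.030, 0.200]  # hypothetical per-parameter p-values
threshold = bonferroni_threshold(alpha, len(p_values))
accepted = [p for p in p_values if p < threshold]
print(threshold, accepted)
```

With four tests the per-test threshold drops to 0.0125, so the parameter with p = 0.030, which would pass an uncorrected 0.05 test, is rejected. This is the conservatism noted above: potentially significant parameters can be excluded from the model.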
Other existing methods seek to control, over a number of significance tests, the probability that a noise parameter has been added, rather than seeking to ensure that no noise parameters are added to a model. Such methods can be more appropriate in fields with large-scale testing, for which the number of potential model parameters that could be tested is large, but less appropriate for applications with fewer model parameters, in which it may be desirable to prevent any noise parameters from being added to the model.
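A well-known procedure of this general kind is the Benjamini-Hochberg step-up method, which controls the expected fraction of noise parameters among those accepted rather than guaranteeing that none are accepted; it is sketched here purely as an illustration, with hypothetical p-values (the methods referenced above are not identified by name):

```python
# Benjamini-Hochberg step-up procedure (illustrative): accept the k smallest
# p-values, where k is the largest rank i with p_(i) <= (i / m) * alpha.
# This bounds the expected proportion of falsely accepted (noise) parameters,
# rather than ensuring that none are accepted.

def benjamini_hochberg(p_values, alpha):
    ranked = sorted(p_values)
    m = len(ranked)
    k = 0
    for i, p in enumerate(ranked, start=1):
        if p <= (i / m) * alpha:
            k = i
    return ranked[:k]  # p-values deemed significant

print(benjamini_hochberg([0.001, 0.012, 0.030, 0.200], alpha=0.05))
```

Such a rule accepts more parameters than a strict family-wise correction at the cost of admitting some expected fraction of noise parameters, which is why it suits large-scale testing better than scatterometry models with few candidate parameters.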
Thus, existing methods are subjective, require large amounts of user input, and fail to prevent insignificant parameters from being added to a model. Existing methods also fail to provide objective and effective means for determining when to terminate significance testing during scatterometry model optimization.