Integrated circuit fabrication relies heavily on frequent and consistent inspection of the structures formed at various stages during the fabrication process. Some of the inspections can be electronic or chemical in nature, but a great many of the inspections that are performed are optical in nature. In other words, the substrates or semiconductor wafers on which the integrated circuits are formed are inspected by collecting electromagnetic radiation such as light received from the substrate, whether that light be reflected from or transmitted through the substrate, and inspecting the properties of the collected light.
Most optical inspections today are accomplished by digitizing the collected light and then analyzing the digitized images with sophisticated computerized analytical routines. These routines compare the images to one or more of a variety of baseline or standardized references, detect differences between the captured images and the references, and then attempt to identify the nature of any differences so detected. This general process is typically very helpful to process engineers and others who are responsible for monitoring and improving both the integrated circuits so formed and the processes by which they are fabricated.
However, there are several inherent difficulties in such an optical inspection process. One general class of issues concerns how closely to inspect the substrate. For example, optics having greater magnification will detect smaller and smaller flaws. Light of shorter wavelength will also detect smaller flaws in the substrate. A sensor having a higher resolution, such as a charge-coupled device having a greater number of smaller-sized pixels, can also detect smaller flaws. Further, software routines can be set using a variety of different parameters to, at one end of the spectrum, flag every difference between a substrate image and a reference as a defect, and at the other end of the spectrum, ignore all but the very largest of differences. Thus, some degree of tuning or optimization of the inspection equipment is typically performed.
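The effect of such a sensitivity parameter can be illustrated with a minimal sketch. The following is a toy difference-based flagger, not any vendor's actual inspection algorithm; all names and values here are hypothetical, assuming only the general scheme described above of comparing a captured image to a reference and flagging differences that exceed a threshold.

```python
# Illustrative sketch only: a toy difference-based defect flagger.
# All names and values are hypothetical, not a real inspection tool's API.

def flag_defects(image, reference, threshold):
    """Flag pixel positions where the captured image differs from the
    reference by more than the sensitivity threshold."""
    return [i for i, (img, ref) in enumerate(zip(image, reference))
            if abs(img - ref) > threshold]

# A captured scan line with one large and one small deviation from reference.
reference = [10, 10, 10, 10, 10, 10]
image     = [10, 10, 18, 10, 11, 10]

# A very low threshold flags every difference as a defect; a high threshold
# ignores all but the largest difference.
print(flag_defects(image, reference, threshold=0))   # flags positions 2 and 4
print(flag_defects(image, reference, threshold=5))   # flags only position 2
```

Lowering the threshold thus catches more anomalies (including uninteresting ones), while raising it risks missing defects of interest, which is precisely the trade-off the user must tune.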
Typically, users optimize inspection recipes in a very laborious trial and error procedure. Starting with any desired recipe, such as a set of threshold parameters that control the sensitivity of an inspection scan, the user runs the inspection with the recipe and then reviews the defects caught by the inspection. If the inspection does not catch enough defects of interest, the user lowers or otherwise adjusts one or more threshold parameters. On the other hand, if the recipe caught too many anomalies which were not of interest, the user increases or otherwise adjusts one or more thresholds. The user then rescans the substrate with the modified recipe and reviews the inspection result again. The user repeats these three steps of tweaking parameters, rescanning the substrate, and reviewing the result, until he arrives at an acceptable set of threshold values.
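The three-step loop above can be sketched in code. This is a hedged illustration of the manual procedure, assuming a single threshold and simple raise/lower adjustment rules; the scan and review functions are hypothetical placeholders standing in for the inspection machine and the human reviewer.

```python
# A sketch of the manual trial-and-error tuning loop: tweak parameters,
# rescan the substrate, review the result, repeat until acceptable.
# scan(), review(), and the adjustment rules are hypothetical placeholders.

def tune_threshold(scan, review, threshold, step=1.0, max_iterations=20):
    """Repeat scan -> review -> adjust until the reviewer is satisfied."""
    for _ in range(max_iterations):
        defects = scan(threshold)       # rescan substrate with the recipe
        verdict = review(defects)       # user reviews the caught defects
        if verdict == "too few":        # missed defects of interest:
            threshold -= step           #   lower the threshold
        elif verdict == "too many":     # caught uninteresting anomalies:
            threshold += step           #   raise the threshold
        else:                           # acceptable recipe found
            break
    return threshold

# Toy usage: a substrate with two large (interesting) and two small
# (uninteresting) anomalies; the reviewer wants exactly the two large ones.
anomalies = [8, 6, 2, 1]
scan = lambda t: [a for a in anomalies if a > t]

def review(defects):
    if len(defects) < 2:
        return "too few"
    if len(defects) > 2:
        return "too many"
    return "acceptable"

print(tune_threshold(scan, review, threshold=0.0))  # settles at 2.0
```

Each pass through the loop body corresponds to one full scan-and-review cycle on the real machine, which is why the procedure is so time consuming in practice.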
The inefficiency of this old method becomes much worse for more advanced defect detection algorithms, such as the segmented auto threshold algorithm of the bright field machines made by KLA-Tencor Technologies Corporation of Santa Clara, Calif., which require many threshold parameters, so that the number of iterations is multiplied by the number of threshold parameters.
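The multiplicative cost can be made concrete with hypothetical numbers: if each parameter takes several tweak-rescan-review iterations to settle, the total scan count scales with the number of parameters in the recipe.

```python
# Rough cost illustration with assumed, hypothetical numbers; actual
# iteration counts vary by user, process, and algorithm.
iterations_per_parameter = 5    # assumed cycles to settle one threshold

single_threshold_scans = iterations_per_parameter * 1    # simple recipe
multi_threshold_scans = iterations_per_parameter * 12    # e.g. a segmented
                                                         # multi-threshold recipe
print(single_threshold_scans, multi_threshold_scans)     # 5 60
```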
The first problem with the old methods is the long setup time. Both scanning the substrate and reviewing the results can take a long time and considerable effort. The second problem is that the resulting recipes are usually far from optimal. One reason for this is that when using this manual optimization method, the machine provides very little if any information about the defects, and the user does not know how, or how much more, the recipe can be improved. As a result, the user essentially has to rely on his intuition or experience in adjusting the parameters, and usually settles on a set of values that are far from optimal. In addition, because the manual method is so time consuming, many users reach a certain level of optimization and then simply stop.
What is needed, therefore, is a system by which threshold parameter optimization can be more easily accomplished and thereby produce better results.