Wafer inspection systems help a semiconductor manufacturer increase and maintain integrated circuit (IC) chip yields by detecting defects that occur during the manufacturing process. One purpose of inspection systems is to monitor whether a manufacturing process meets specifications. When the manufacturing process falls outside established norms, the inspection system indicates the problem and/or its source, which the semiconductor manufacturer can then address.
Evolution of the semiconductor manufacturing industry is placing ever greater demands on yield management and, in particular, on metrology and inspection systems. Critical dimensions are shrinking while wafer size is increasing. Economics is driving the industry to decrease the time for achieving high-yield, high-value production. Thus, minimizing the total time from detecting a yield problem to fixing it determines the return-on-investment for the semiconductor manufacturer.
The process of inspecting semiconductor wafers to detect defects is important to semiconductor manufacturers. Defects cause wafer yields to decline, which increases overall semiconductor manufacturing costs. These increased costs are eventually passed on to the consumer, who pays a higher price for all products containing electronic components, from phones to automobiles. Inspection tools provide semiconductor manufacturers the ability to detect defects on wafers automatically. Thereafter, the manufacturer can eliminate these defects by changing one or more of its designs or processes.
The process of reviewing semiconductor defects in a semiconductor fabrication facility or foundry is expensive in terms of both human effort and time. The more efficient the review sampling, the lower the operational costs will be. Review tools can produce resolved, high-magnification images for a user, but review tools tend to be slow. The user has to review the defects in these images so that appropriate countermeasures can be taken. To do this, the user classifies the defects into types that can pinpoint the cause of each defect. This classification process requires extensive human effort and is slow, which results in high operational costs. Review would be improved if every defect (cause) type were captured in one sampling without missing defects.
After an inspection process, the local designs at the defect locations (returned by inspection) are grouped by a method called Design Based Grouping (DBG). Depending upon the statistics of these groups, one or more locations from each group are chosen for review sampling. DBG employs an encoding scheme for each corner and surrounding geometry. This information is used to quickly find all patterns that exactly match a given pattern. This means that two locations are in a DBG group if their designs match exactly, and the design patterns of two locations that look similar but are numerically different even to a small extent fall into different groups.
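The grouping described above can be illustrated with a minimal sketch. The encoding function below is hypothetical and stands in for DBG's corner-and-geometry encoding; the point is that grouping keys on an exact match of the encoding, so patterns that differ even slightly land in different groups:

```python
from collections import defaultdict

def dbg_group(defect_locations, encode):
    """Group defect locations whose local design encodings match exactly.

    encode(loc) returns a hashable encoding of the corners and surrounding
    geometry at a defect location (a hypothetical stand-in for DBG's
    encoding scheme).
    """
    groups = defaultdict(list)
    for loc in defect_locations:
        groups[encode(loc)].append(loc)
    return groups

def sample_for_review(groups, per_group=1):
    # Choose one or more representative locations from each group.
    return [loc for members in groups.values() for loc in members[:per_group]]

# Toy example: encodings are pre-computed tuples. The first two locations
# match exactly and share a group; the third differs slightly and does not.
locs = [("d1", (10, 20)), ("d2", (10, 20)), ("d3", (10, 21))]
groups = dbg_group(locs, encode=lambda loc: loc[1])
sample = sample_for_review(groups)
```

Because the key is an exact encoding, even a one-unit numerical difference produces a separate group, which is the behavior the following paragraphs identify as a limitation.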
Because of the quantitative nature of its language and coding mechanism, users have difficulty writing “rules” with DBG. The ability to write rules can be important because a user knows from experience the vulnerable locations on a layer that are likely to cause defects. The defects arising at these locations can be rare but disastrous, and DBG-based review sampling that relies solely on statistics may miss this tiny population if limits are set on the sample size. The DBG methodology also does not explicitly indicate the design violations and weaknesses that cause systematic defects. It does so only through examples of patterns based on the defect frequencies of those pattern encodings.
Because DBG grouping is carried out by exact matching, design patterns that look similar but differ numerically, even slightly, fall into different groups. This can create too many groups. Hence, a sampling that relies on DBG groups may miss an important defect type, or it may sample too many defects of the same type. These deficiencies mean that DBG may not produce efficient review sampling.
Besides DBG, a language used in Design Rule Check(s) (DRC) or Mask Rule Check(s) (MRC) can determine whether the physical layout of a particular chip satisfies a series of recommended parameters called design rules. DRC is an application of Standard Verification Rule Format (SVRF) scripting. DRC rules are a set of specific, dimension-driven rules written using SVRF. A purpose of DRC is to ensure that a design adheres to rules that ease manufacturability.
SVRF is a common scripting language used by most Computer-Aided Design (CAD) tools for design/polygon manipulation. For example, SVRF can be used to manipulate polygons and find physical layout properties (e.g., minimum spaces, minimum widths) in a semiconductor-design layout (e.g., a physical design). This language requires exact knowledge of the distances between shape primitives and of exact spatial relationships in order to define a pattern of interest (POI). Therefore, there can be thousands of such complex rules that are checked for violations when the design is laid out to decide which parts need Optical Proximity Correction (OPC). However, there are problems with using a language such as SVRF.
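A dimensional check of the kind SVRF expresses can be sketched in ordinary code. The following is a simplified stand-in for a minimum-space rule, not SVRF itself, and it assumes axis-aligned rectangles rather than general polygons:

```python
def min_space_violations(rects, min_space):
    """Flag pairs of axis-aligned rectangles closer than min_space.

    Each rectangle is (x1, y1, x2, y2). This is a simplified stand-in for
    a dimension-driven DRC check such as an SVRF spacing rule.
    """
    def gap(a, b):
        # Separation along each axis (0 if the projections overlap).
        dx = max(a[0] - b[2], b[0] - a[2], 0)
        dy = max(a[1] - b[3], b[1] - a[3], 0)
        return (dx * dx + dy * dy) ** 0.5

    violations = []
    for i in range(len(rects)):
        for j in range(i + 1, len(rects)):
            if 0 < gap(rects[i], rects[j]) < min_space:
                violations.append((i, j))
    return violations

# The first two lines sit 3 units apart and violate a 5-unit minimum-space
# rule; the third line is far enough away to pass.
rects = [(0, 0, 10, 2), (0, 5, 10, 7), (0, 20, 10, 22)]
violations = min_space_violations(rects, min_space=5)
```

Note that the rule passes or fails on an exact threshold; this crispness is precisely the property the following paragraphs argue makes such a language ill-suited to fuzzy pattern search.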
Very-large-scale integration (VLSI) chip manufacturing is done in a sub-diffraction regime. As a result, it may be necessary to heavily compensate for proximity effects arising from diffraction by complex 2-D grating-like structures (e.g., photomasks). In spite of the nature of this domain, most of the rule checks performed on semiconductor design files are dimensional checks (e.g., space, width, coverage, etc.). At the various stages of the manufacturing flow, from DRC to MRC, there are no rules that check relational parameters of a design that may be heavily impacted by proximity effects.
In electronic design automation, DRC can be used to determine whether the physical layout of a particular chip satisfies a series of recommended parameters called design rules. DRC can be performed by writing a program in the SVRF language. However, just as one programming language is better suited to certain programming tasks than others, SVRF-DRC is not suited for a fuzzy rule search engine application because SVRF-DRC is a scientifically crisp, dimension-driven language. Because DRC is a crisp decision-making process, the SVRF scripting language supports this crispness. The crisp language of SVRF-DRC does not account for any unforeseen changes and/or impediments.
In SVRF, a physical design layout is polygon information in the form of coordinates. SVRF is optimized to decipher information like edges, spaces, width, or area based on the coordinate information. SVRF then uses this information to perform more complicated operations on the layout. One polygon attribute that SVRF uses is an edge. SVRF constructs all other polygon properties like line-ends, convex corners, or concave corners based on the edges. The process of constructing other polygon attributes using SVRF dilutes the purity of the rule by introducing false positives. Modifying SVRF to give a range of dimensions will increase the possibility of catching false positives because of innumerable, similar polygon combinations present in the layout.
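The derivation of corner attributes from edges, as described above, can be illustrated with a minimal sketch. This is not SVRF; it assumes a simple polygon with vertices given in counter-clockwise order and classifies each corner by the turn direction of its adjacent edges:

```python
def classify_corners(vertices):
    """Classify each corner of a simple polygon as convex or concave.

    vertices: list of (x, y) points in counter-clockwise order. The cross
    product of the incoming and outgoing edge vectors gives the turn
    direction at each corner: positive => convex (left turn),
    negative => concave (right turn).
    """
    n = len(vertices)
    labels = []
    for i in range(n):
        px, py = vertices[i - 1]          # previous vertex
        cx, cy = vertices[i]              # corner being classified
        nx, ny = vertices[(i + 1) % n]    # next vertex
        cross = (cx - px) * (ny - cy) - (cy - py) * (nx - cx)
        labels.append("convex" if cross > 0 else "concave")
    return labels

# An L-shaped polygon (counter-clockwise) has five convex corners and one
# concave (reentrant) corner.
l_shape = [(0, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)]
labels = classify_corners(l_shape)
```

Even in this toy form, the classification depends on exact coordinates; deriving higher-level attributes from edges this way is where, as noted above, false positives can creep in once dimension ranges are allowed.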
The SVRF language also does not provide the user with a UI for quickly mastering the language. Thus, it forces users to spend a considerable amount of time, effort, and resources learning this new scripting language. The concept of similar kinds of patterns is difficult to specify using a scripting language like SVRF. Moreover, the violation of traditional dimensionality checks in a DRC, using a rule table written in SVRF or some other similar language, is not an accurate pointer to process or manufacturing flaws.
Therefore, what is needed are improved systems and techniques for review sampling of defects.