1. Field of the Invention
The present invention relates to a defect inspecting method and an apparatus that detect a difference between corresponding signals, that compare the detected difference with a threshold, and that, if the difference is larger than the threshold, judge that one of the signals is defective. More particularly, the present invention is concerned with a defect inspecting method and an apparatus that detect a difference in a gray-scale level between corresponding portions of two images derived from two objects, that compare the detected difference in a gray-scale level with a threshold, and that, if the difference in a gray-scale level is larger than the threshold, judge that one of the objects is defective. Moreover, the present invention is concerned with an inspection machine that detects a defective semiconductor circuit pattern formed on a semiconductor wafer according to said defect inspecting method.
2. Description of the Related Art
The present invention relates to an image processing method and apparatus that compare corresponding portions of two images derived from two objects that are supposed to be identical to each other, and that, if the difference between the corresponding portions is large, judge that one of the objects is defective. Herein, a description will be made by taking for instance an inspection machine that detects a defective semiconductor circuit pattern formed on a semiconductor wafer in a semiconductor manufacturing process. However, the present invention is not limited to the inspection machine.
Generally, inspection machines are realized as bright-field inspection machines that illuminate the surface of an object with light falling on the surface vertically and capture an image represented by reflected light. Dark-field inspection machines that do not directly capture illumination light are also in use. In a dark-field inspection machine, the surface of an object is illuminated with light falling on the surface obliquely or vertically, and a sensor is disposed so that it will not detect regularly reflected light. Spots illuminated with illumination light are successively scanned in order to produce a dark-field image of the surface of the object. Some dark-field inspection machines do not employ an image sensor. The present invention encompasses this type of dark-field inspection machine. The present invention is adaptable to any image processing method and apparatus that compare corresponding portions of two images (image signals) derived from two objects that are supposed to be identical to each other and that, if the difference is large, judge that one of the objects is defective.
In a semiconductor manufacturing process, numerous chips (dice) are formed on a semiconductor wafer. Each die is patterned in multiple layers. The completed die is electrically inspected using a prober and a tester, and a defective die is excluded from an assembling step. What counts greatly in the semiconductor manufacturing process is the yield. The result of the electric inspection is fed back to the manufacturing process and reviewed for the management of processing steps.
However, the semiconductor manufacturing process comprises numerous steps, and much time elapses between the start of manufacture and the start of the electric inspection. Consequently, by the time the electric inspection reveals that the semiconductor manufacturing process has a drawback, numerous wafers have already been processed, and the result of the electric inspection cannot be fully utilized for improvement. Therefore, defective pattern inspection is performed on patterns formed at intermediate steps so as to detect a defect. If the defective pattern inspection is performed at a plurality of steps, a defect occurring after a previous inspection can be detected, and the result of inspection can be swiftly fed back for management of the processing steps.
In currently available inspection machines, a semiconductor wafer is illuminated, and an image of a semiconductor circuit pattern is optically captured in order to produce an electric image signal. The electric image signal is converted into a multi-valued digital signal (digital gray-scale level signal). A difference signal (gray-scale level difference signal) is produced. Herein, the difference signal represents a difference from a gray-scale level signal that represents a reference pattern. When the difference is larger than a predetermined threshold, the pattern is judged to be defective (refer to, for example, Japanese Unexamined Patent Publication No. 2000-323541). The reference pattern is generally a pattern formed on an adjoining die or an adjoining identical pattern.
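The thresholding described above can be sketched in a few lines. The following Python sketch is illustrative only (the function and variable names are assumptions, not from the publication): it compares an inspected image with a reference image pixel by pixel and reports pixels whose gray-scale difference exceeds a threshold.

```python
def find_defects(inspected, reference, threshold):
    """Report (x, y) positions whose gray-scale difference exceeds the threshold.

    `inspected` and `reference` are equally sized 2-D lists of gray-scale levels.
    """
    defects = []
    for y, (row_i, row_r) in enumerate(zip(inspected, reference)):
        for x, (gi, gr) in enumerate(zip(row_i, row_r)):
            # The gray-scale level difference signal for this pixel:
            if abs(gi - gr) > threshold:
                defects.append((x, y))
    return defects
```

In practice the reference image is the corresponding portion of an adjoining die, aligned beforehand as described below.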
FIG. 6 schematically shows the configuration of a conventional semiconductor device inspection machine. Referring to the drawing, an inspection machine 10 comprises: a high-precision xy stage 11 that is movable in the x and y directions; a wafer chuck 12 fixed to the xy stage 11; an objective lens 14; a tube lens 15; a beam splitter 16; a light source 17; an optical imaging unit 18 such as a TDI camera; an analog-to-digital (A/D) converter 19; an image memory 20; an image alignment unit 22; a difference detector 26; a defect judgement unit 27; and an output unit 28.
When the xy stage 11 moves a wafer 13, the imaging unit 18 scans the wafer 13 in the x and y directions and images the entire surface of the wafer 13. FIG. 7 illustrates the wafer 13. As illustrated, numerous dice 31, 33, etc. having substantially the same pattern formed thereon are formed orderly on the wafer 13. The imaging unit 18 scans the wafer 13 as indicated with arrows in the drawing so as to image the surface of the wafer. An image signal produced by the imaging unit 18 is analog-to-digital converted and then stored in the image memory 20.
FIG. 8A illustrates part of a produced image representing the surface of the wafer 13 after analog-to-digital conversion. An area surrounded with a dotted line in the drawing corresponds to the part of the image. As illustrated, the image is divided into image domains (41a to 41h) each having a certain number of pixels. In units of one image domain, a produced image of an inspected pattern is aligned with a produced image of a reference pattern. The alignment will be described later.
If the imaging unit 18 is a line sensor, the number of pixels in the Y direction within the image domain 41 generally agrees with the number of pixel locations in the imaging unit 18. The number of pixels constituting the image domain 41 is, for example, a product of 2048 pixels by 2048 pixels.
FIG. 8B illustrates an image corresponding to one image domain 42. As illustrated, the image domain is divided into frames (43a, 43b, 43c, . . . 43h) having a certain number of pixels. The aforesaid defect inspection is performed in units of one frame. For the defect inspection, a difference-in-gray-scale level signal representing a difference in a gray-scale level between a frame (inspected frame) included in a produced image of an inspected die and a frame (reference frame) included in a produced image of a reference die is produced. The number of pixels constituting one frame is, for example, a product of 2048 pixels by 64 pixels.
Generally, the dimension of each image domain 41 is defined differently from the dimension of each die 31. Therefore, in order to produce the difference-in-gray-scale level signal representing the difference in a gray-scale level between the inspected frame and reference frame, it must be determined to which image domains 41 the frames belong and at what positions within those image domains the frames are located. Thereafter, the frames must be aligned with each other. An alignment procedure will be described below.
First, the image alignment unit 22 aligns an inspected frame and a reference frame with each other by shifting the frames to a degree measured in units of a pixel. This alignment shall be referred to as pixel alignment. In the pixel alignment, first, a database containing semiconductor circuit design CAD data is referenced in order to determine to which image domains 41 the inspected frame and reference frame belong. For example, a die 31 shall be a reference die, and a die 33 shall be an inspected die. A domain 41a containing the upper left portion of an image representing the reference die 31, and a domain 41c containing the upper left portion of an image representing the inspected die 33, are determined (see FIG. 8A).
Thereafter, an alignment reference image 45 corresponding to a predetermined position in the image of the reference die 31 is sampled from the domain 41a. An image 45a having the same size as the alignment reference image 45 and corresponding to a predetermined position in the image of the inspected die 33 is sampled from the domain 41c. Moreover, several images are sampled from the domain 41c by shifting the image 45a to a degree measured in units of a pixel. For example, an image 45b is sampled by shifting the image 45a by one pixel in the x direction, and an image 45c is sampled by shifting the image 45a by one pixel in the −x direction. FIG. 9A illustrates the alignment reference image 45 that is a portion of the image produced to represent the reference die 31. FIG. 9B illustrates the images 45a to 45c that are portions of the image produced to represent the inspected die 33.
The thus sampled images (45a to 45c) corresponding to portions of the image of the inspected die 33 are compared with the alignment reference image 45. An image exhibiting the largest correlation value is selected from these sampled images. The position of the selected image within the domain 41c is compared with the position of the alignment reference image 45 within the domain 41a. Thus, a degree of pixel alignment, that is, the shift by which the image of the inspected die 33 belonging to the image domain 41c must be moved to align with the image of the reference die 31 belonging to the image domain 41a, is calculated.
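The correlation search described above can be sketched as follows. This Python sketch is illustrative, not the machine's actual implementation; all names and the plain sum-of-products correlation are assumptions. It samples candidate patches from the inspected image at integer shifts around a nominal position and keeps the shift whose patch correlates most strongly with the alignment reference image.

```python
def correlation(a, b):
    """Plain sum-of-products correlation between two equally sized patches."""
    return sum(pa * pb for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def crop(image, x0, y0, w, h):
    """Sample a w-by-h patch whose upper left corner is at (x0, y0)."""
    return [row[x0:x0 + w] for row in image[y0:y0 + h]]

def best_pixel_shift(reference_patch, inspected, x0, y0, search=1):
    """Return the integer (dx, dy) shift at which the correlation peaks."""
    h, w = len(reference_patch), len(reference_patch[0])
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            patch = crop(inspected, x0 + dx, y0 + dy, w, h)
            c = correlation(reference_patch, patch)
            if best is None or c > best[0]:
                best = (c, dx, dy)
    return best[1], best[2]
```

A real machine would use a normalized correlation measure and guard the patch against the image border; those details are omitted here.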
Thereafter, the image alignment unit 22 performs alignment by causing a shift to a degree measured in units of one pixel or less. This alignment shall be referred to as sub-pixel alignment. The sub-pixel alignment will be described below. Herein, an image that is found to most highly correlate with the alignment reference image 45 during the pixel alignment shall be the image 45a. 
The image alignment unit 22 produces images 45a′ by shifting the image 45a in steps of a predetermined skip width (for example, 0.1 pixel) over the range from −0.5 pixel to 0.5 pixel. This is shown in FIG. 10A to FIG. 10D.
A description will be given of a procedure for producing an image by shifting the image 45a shown in FIG. 10A by 0.2 pixel in the x direction. Squares shown in FIG. 10A indicate pixels constituting the image 45a. Numerals written in the squares represent the gray-scale levels exhibited by the respective pixels.
When the image 45a is shifted in the x direction, the values of two pixels adjoining in the x direction are generally used to interpolate the pixel value at the shifted position. For example, referring to FIG. 10B, linear interpolation is adopted. Specifically, a linear function relating a pixel value to a pixel center position is obtained from the spatial positions of the centers of the adjoining pixels 46a and 46b and their pixel values. Based on the linear function, the pixel value which the pixel 46a assumes with its center position shifted by 0.2 pixel is calculated. This pixel value is regarded as the pixel value of the associated pixel 46a′ in the image produced by shifting the image 45a by 0.2 pixel. FIG. 10C shows the image 45a′ produced by shifting the image 45a shown in FIG. 10A by 0.2 pixel in the x direction.
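The linear interpolation described above can be sketched for a single row of pixels as follows. This Python sketch is illustrative (the function name and the edge-replication choice are assumptions): shifting a row by a fraction of a pixel makes each output pixel a weighted mix of its own value and its left neighbour's value.

```python
def shift_row(row, frac):
    """Shift a row of gray-scale levels by `frac` (0 <= frac < 1) pixels in +x.

    Each output value is linearly interpolated between a pixel and its
    left neighbour; the leftmost pixel is replicated at the border.
    """
    out = []
    for x in range(len(row)):
        left = row[x - 1] if x > 0 else row[x]  # replicate the edge pixel
        out.append((1 - frac) * row[x] + frac * left)
    return out
```

Shifting a two-dimensional image applies the same interpolation along each row (and analogously along each column for a y-direction shift).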
A correlation value indicating the degree of correlation between each image 45a′ produced by shifting the image 45a by a predetermined skip width and the alignment reference image 45 is calculated. Consequently, an approximate expression indicating the relationship between a degree of sub-pixel alignment and the correlation value is obtained (see FIG. 10D). A degree of sub-pixel alignment associated with the largest correlation value shall be adopted. Thus, a degree of sub-pixel alignment is calculated as a degree to which the image of the inspected die 33 belonging to the image domain 41c must be shifted to align with the image of the reference die 31 belonging to the image domain 41a. 
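The text does not specify the form of the approximate expression in FIG. 10D; one common choice, assumed here purely for illustration, is to fit a parabola through the correlation values at three neighbouring shifts and take the parabola's vertex as the degree of sub-pixel alignment.

```python
def parabolic_peak(s_prev, s0, s_next, c_prev, c0, c_next):
    """Vertex of the parabola through (s_prev, c_prev), (s0, c0), (s_next, c_next).

    Assumes equally spaced shifts; returns the shift that maximizes
    the correlation under the parabolic approximation.
    """
    step = s0 - s_prev
    denom = c_prev - 2 * c0 + c_next
    if denom == 0:  # degenerate (flat) case: keep the middle sample
        return s0
    return s0 + 0.5 * step * (c_prev - c_next) / denom
```

With a skip width of 0.1 pixel, `s_prev`, `s0`, and `s_next` would be three consecutive candidate shifts and the `c` values their measured correlations.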
The image alignment unit 22 performs the pixel alignment and sub-pixel alignment in both the x and y directions, and transmits calculated degrees of alignment to the difference detector 26. Based on the received information, the difference detector 26 samples an inspected frame contained in an image of an inspected pattern and an associated frame contained in an image of a reference pattern from the images stored in the image memory 20. Consequently, a difference-in-gray-scale level signal is produced relative to each of the pixels constituting the frames.
Based on the difference-in-gray-scale level signal produced by the difference detector 26, the defect judgement unit 27 judges whether the difference-in-gray-scale level signal exceeds a predetermined threshold. If the signal exceeds the threshold, the signal is transmitted as a defective signal to the output unit 28.
In the foregoing conventional inspection machine, when a difference-in-gray-scale level signal is produced, the difference in a gray-scale level is corrected using information on adjoining pixels. This shall be referred to as edge factor handling. The edge factor handling is intended to minimize the possibility of incorrectly detecting a defect. Specifically, an image of a pattern edge exhibits a greatly varying gray-scale level and suffers larger noise than any other part of the image, so the difference in a gray-scale level of the pattern edge image tends to be large even when no defect is present. The edge factor handling therefore deliberately reduces the detected difference in a gray-scale level of the pattern edge image.
For the edge factor handling, a differential filter is applied to the pixels constituting either the inspected frame or the reference frame in order to detect an image portion whose gray-scale level varies greatly, that is, a pattern edge image. The difference in a gray-scale level of the detected pattern edge image is reduced. A concrete procedure will be described below.
The differential filter shall be a 3×3 filter. A typical 3×3 filter is expressed as a formula (1) below where coefficients ai,j are predetermined constants.
        ( a-1,-1  a-1,0  a-1,1 )
    A = ( a0,-1   a0,0   a0,1  )                                        (1)
        ( a1,-1   a1,0   a1,1  )
FIG. 11A shows a pixel 47 whose position is represented with coordinates (x, y) and surrounding pixels as well as their pixel values. As shown in FIG. 11A, the pixel value of a pixel at coordinates (x, y) shall be GL(x, y).
Assuming that the filter A is applied to the pixel 47, the pixel value A*GL(x, y) resulting from the filtering is calculated based on its own pixel value GL(x, y) and the surrounding pixel values according to a formula (2):
    A*GL(x, y) = Σ(i=−1 to 1) Σ(j=−1 to 1) ai,j × GL(x+j, y+i)        (2)
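Formula (2) can be transcribed directly into code. The following Python sketch (illustrative names; border handling is left to the caller) applies a 3×3 filter A to the pixel at (x, y).

```python
def apply_3x3(A, GL, x, y):
    """Formula (2): sum a[i][j] * GL(x+j, y+i) over i, j in {-1, 0, 1}.

    A is a 3x3 list with A[i + 1][j + 1] holding coefficient a_{i,j};
    GL is a 2-D list indexed GL[y][x]. Valid only for interior pixels.
    """
    return sum(A[i + 1][j + 1] * GL[y + i][x + j]
               for i in (-1, 0, 1) for j in (-1, 0, 1))
```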
A differential filter dX for the x direction and a differential filter dY for the y direction are expressed as formulas (3) and (4) below. Assume that the x-direction differential filter expressed as the formula (3) is applied to each of pixels constituting an image shown in FIG. 12B. The image shown in FIG. 12B is an image in which pixel values vary in the x direction between a column starting with a pixel 48 and a column starting with a pixel 49, that is, within an edge pattern image.
         ( −1  0  1 )                   ( −1  −2  −1 )
    dX = ( −2  0  2 )        (3)   dY = (  0   0   0 )        (4)
         ( −1  0  1 )                   (  1   2   1 )
FIG. 12C shows an image that has undergone the filtering. As shown in the drawing, pixel values are large only in the columns starting with the pixels 48 and 49, that is, in the edge pattern image. Consequently, the edge pattern image is detected. A difference in a gray-scale level ΔGL of each pixel is corrected according to a formula (5) below, whereby the difference in a gray-scale level of the edge image is minimized. In this case, a magnitude of correction is expressed as a formula (6).

    corrected difference in a gray-scale level = ΔGL × (1 − A(|dX*GL(x, y)| + |dY*GL(x, y)|))        (5)

    magnitude of correction = ΔGL × A(|dX*GL(x, y)| + |dY*GL(x, y)|)        (6)
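The correction of formulas (5) and (6) can be sketched as follows. This Python sketch is illustrative only (helper names are assumptions): the gray-scale difference of a pixel is scaled down in proportion to the combined magnitudes of the x- and y-direction filter responses at that pixel.

```python
DX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # formula (3)
DY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # formula (4)

def filt(F, GL, x, y):
    """Apply a 3x3 filter F at interior pixel (x, y) per formula (2)."""
    return sum(F[i + 1][j + 1] * GL[y + i][x + j]
               for i in (-1, 0, 1) for j in (-1, 0, 1))

def corrected_difference(delta_gl, GL, x, y, A):
    """Formula (5): reduce the difference where the edge response is large.

    `delta_gl` is the raw gray-scale difference ΔGL at (x, y);
    A is the predetermined constant of proportion.
    """
    edge = abs(filt(DX, GL, x, y)) + abs(filt(DY, GL, x, y))
    return delta_gl * (1 - A * edge)
```

On a flat image region the edge response is zero and the difference passes through unchanged; near a pattern edge the difference is attenuated, as formula (6) quantifies.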
Herein, GL(x, y) denotes a pixel value of a pixel at a position represented with coordinates (x, y), and A denotes a predetermined constant of proportion.
An edge factor handler 23 applies a differential filter 24 (including the x-direction and y-direction differential filters dX and dY) to each of the pixels constituting an inspected image. Consequently, a variation in a gray-scale level of each pixel relative to that of an adjoining pixel in the x or y direction, that is, a differentiated value of each pixel value (differential-filtered value), is calculated. An image portion exhibiting a large differentiated value (change), that is, a pattern edge image, is detected.
Moreover, a magnitude-of-correction determination unit 25 determines a magnitude of correction, by which a difference in a gray-scale level detected by the difference detector 26 is corrected, according to the formula (6) on the basis of a filtered image provided by the edge factor handler 23.
When dice are patterned, a condition for exposure or a condition for etching differs slightly from die to die. An edge formed on one die is therefore finished slightly differently from that formed on another die, and an edge of a pattern formed on one die differs from that of a pattern formed on another die. This is thought to be a main cause of the noise contained in an image of the pattern edge. The conventional edge factor handling for reducing a difference in a gray-scale level of a pattern edge image is based on this idea.
However, a pattern edge image suffers not only the above noise attributable to an error of the pattern edge but also noise attributable to the sub-pixel alignment. This is because, whatever interpolation is employed in the sub-pixel alignment, the gray-scale level of a pattern edge image resulting from the sub-pixel alignment differs from that of an actually produced image of a pattern edge.
The error introduced by the sub-pixel alignment depends on the degree of sub-pixel alignment. Conventionally, however, the difference in a gray-scale level is uniformly corrected (through edge factor handling) according to the formula (5), irrespective of that degree.
Therefore, conventionally, the edge factor handler 23 is designed so that it will not falsely detect a defect even under the largest possible noise that is predicted in consideration of the error introduced by the sub-pixel alignment. This degrades the sensitivity in detecting a defective pattern edge image.