Various methods have been proposed and practiced for positioning (also to be referred to as “alignment” hereinafter) between a master, such as a reticle, and a substrate, such as a wafer, in an exposure apparatus. Alignment in the exposure apparatus will be briefly explained with reference to FIG. 6.
Positioning a reticle R with respect to a projection lens UL is called reticle alignment. Reticle alignment is performed by, e.g., driving a reticle stage RS so as to correct a shift between a reticle mark (not shown) on the reticle R and a reference mark RR on the reticle stage RS. More specifically, the reticle mark on the reticle R held on the reticle stage RS and the reference mark RR, which is formed on the reticle stage RS and has a known reference position, are sensed by a camera (C1) 201. The sensed images are processed by an image processor 305 in a camera controller 300 to measure the relative shift amount. A drive control device (SF) 121 drives a reticle driving device (MOT1) 123 so as to correct the measured relative shift amount. Note that the reference mark may exist on a wafer stage WS.
Measuring the positional relationship between a wafer WAF held on the wafer stage WS and a reference position is called wafer alignment. Actual alignment is done by driving the wafer stage WS prior to exposure of each shot on the basis of the measurement result. More specifically, in wafer alignment, the position of an alignment mark (not shown) on the wafer WAF chucked by the wafer stage WS is sensed by a camera (C3) 203 via an off-axis microscope OAS outside the projection lens UL. The sensed image is processed by the image processor 305 of the camera controller 300 to measure the mark position. By measuring the positions of a plurality of marks on the wafer WAF, the position of the wafer WAF on the wafer stage WS and the position of an exposed shot on the wafer WAF can be measured. Note that the microscope may be of a TTL or TTR type, in which a wafer mark is observed via the projection lens UL.
Measuring the relative positions of the reticle stage RS, which holds the reticle R, and the wafer stage WS, which holds the wafer WAF, is called calibration. More specifically, in calibration, light emitted by a light source (not shown) irradiates a reference mark WR on the wafer stage WS via a reticle stage reference mark RSM and the projection lens UL. Light containing image information of the reference mark WR passes through the projection lens UL and the reticle stage reference mark RSM again and reaches a camera (C2) 202. An image formed on the image sensing plane of the camera (C2) 202 is a synthesized image of the reference mark WR on the wafer stage WS and the reticle stage reference mark RSM. The image processor 305 processes the synthesized image of WR and RSM to calculate the relative horizontal distance between the reference mark WR and the reference mark RSM, thereby measuring the relative positional relationship between the reticle stage RS and the wafer stage WS.
If the positional relationship between RR and RSM is known, the positional relationship between the reticle and the wafer is determined by the above measurement, and a pattern on the reticle can be accurately transferred to the wafer.
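The relative-shift calculation described above can be sketched as follows. This is a hypothetical illustration of the computation the image processor 305 performs once the mark centers have been extracted from the synthesized image (e.g., by template matching); the function name, argument layout, and pixel scale are assumptions, not part of the apparatus described here.

```python
# Hypothetical sketch of the relative-shift calculation in calibration.
# Mark centers are assumed to be already extracted from the synthesized
# image of WR and RSM; the pixel scale value is purely illustrative.

def relative_shift(wr_pos, rsm_pos, pixel_scale_um=0.05):
    """Return the (x, y) shift of WR relative to RSM in microns.

    wr_pos, rsm_pos: (x, y) mark centers in pixels on the C2 image.
    pixel_scale_um: assumed size of one image pixel at the wafer, in microns.
    """
    dx = (wr_pos[0] - rsm_pos[0]) * pixel_scale_um
    dy = (wr_pos[1] - rsm_pos[1]) * pixel_scale_um
    return dx, dy

# Example: WR imaged 12 pixels right of and 4 pixels below RSM
# yields a shift of roughly (0.6 um, 0.2 um) at this assumed scale.
shift = relative_shift((512.0, 388.0), (500.0, 384.0))
```

Driving the wafer stage WS (or the reticle stage RS) by the negative of this shift would then bring the two marks into the measured relationship.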
In this manner, a recent semiconductor exposure apparatus executes various measurements, and the camera used changes depending on the measurement application. The exposure performance now demanded of the semiconductor exposure apparatus is a line width of 100 nm or less, so the measurement precision required for the above-mentioned alignment and position measurement is very high.
In order to increase the measurement precision, the microscope magnification may be increased. However, an increase in magnification is restricted, and high measurement precision must be achieved even when the relative shift in calibration measurement, or the like, is large. That is, the camera used in calibration measurement, or the like, must combine a wide detection range with high precision. To meet this demand, the range covered by one image sensing operation must be widened, and a high-pixel-density camera (a CCD camera, or the like) must be adopted.
A semiconductor manufacturing apparatus performs measurement using various cameras, as described above, and a camera appropriate for each application must be selected. Cameras must be properly used such that a high-resolution camera with one million to two million pixels serves as the calibration measurement camera (C2) 202, while the standard cameras C1 and C3 with four hundred thousand pixels serve as the reticle and wafer alignment cameras. Making all the cameras high-resolution is possible, but is not preferable in consideration of the cost, image transfer time, and the like.
Baseline measurement is an example in which a plurality of cameras installed in a semiconductor exposure apparatus are used to execute various measurements at high speed.
In baseline measurement, the relative positions of the reticle stage RS, wafer stage WS, and off-axis microscope OAS are measured. More specifically, baseline measurement can be achieved by the following flow.
<STEP 1>
A camera selector (SEL) 301 of the camera controller 300 is switched to the camera (C2) 202, and the relative horizontal distance between RSM and WR is measured.
<STEP 2>
The camera selector (SEL) 301 of the camera controller 300 is switched to the camera (C3) 203. The wafer stage WS is moved so as to position WR below the off-axis microscope OAS. An image of WR is sensed by the camera (C3) 203, and the WR position is measured from the sensed image.
<STEP 3>
The positional relationship between RS and the off-axis microscope OAS is determined on the basis of the WR position measured using the camera (C3) 203 and the RSM and WR positions measured using the camera (C2) 202.
<STEP 4>
STEP 1 to STEP 3 are repetitively executed in order to increase the measurement precision.
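The flow of STEP 1 to STEP 4 can be sketched as follows. This is a hypothetical illustration only: the callback names (select_camera, measure_with_c2, and so on) stand in for the real apparatus interfaces, which are not named in this description, and averaging is one plausible way to combine the repeated measurements of STEP 4.

```python
# Hypothetical sketch of the baseline measurement flow (STEP 1 to STEP 4).
# All callback names are assumptions standing in for apparatus interfaces.

def measure_baseline(select_camera, measure_with_c2, move_wr_under_oas,
                     measure_with_c3, n_repeats=4):
    """Repeat the C2/C3 measurement cycle and average to raise precision."""
    samples = []
    for _ in range(n_repeats):
        select_camera("C2")                    # STEP 1: C2 measures RSM-WR
        rsm_to_wr = measure_with_c2()
        select_camera("C3")                    # STEP 2: switch to C3 and
        move_wr_under_oas()                    # move WR below the OAS
        wr_at_oas = measure_with_c3()
        samples.append(wr_at_oas - rsm_to_wr)  # STEP 3: RS-to-OAS relation
    return sum(samples) / len(samples)         # STEP 4: average the repeats
```

Note that each loop iteration switches cameras twice; this is the switching cost discussed below.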
As described above, baseline measurement repetitively executes measurement using the camera (C2) 202 and camera (C3) 203. In baseline measurement, the exposure apparatus does not perform any exposure, and a long measurement time decreases the productivity of the exposure apparatus. Hence, baseline measurement must be processed at a high speed.
A problem arising when the number of pixels (i.e., the specification) differs between the camera (C2) 202 and the camera (C3) 203 will be explained. FIG. 5A shows the sync signals of the standard camera (C3) 203 with four hundred thousand pixels, and FIG. 5B shows the sync signals of the high-resolution (nonstandard) camera (C2) 202 with two million pixels. The sync signals VD and HD supplied to the two cameras differ between them. In baseline measurement, when the standard camera (C3) 203 is switched to the nonstandard camera (C2) 202, the output pattern of a sync signal generator (SYNC) 303 must simultaneously be switched to the pattern for the camera (C2) 202. Likewise, when the nonstandard camera (C2) 202 is switched back to the standard camera (C3) 203, the output pattern of the sync signal generator (SYNC) 303 must simultaneously be switched to the pattern for the camera (C3) 203.
In general, a CCD camera often used as the above-described camera cannot obtain a stable video signal during several frames upon a change in sync signal frequency. This problem arises from the presence of a delay in a circuit for detecting the phase of an externally supplied sync signal, resetting the internal operation of the camera, and outputting a video signal.
Switching between the camera (C2) 202 and the camera (C3) 203 therefore cannot be done instantaneously, and requires a non-negligible time. As a result, the camera switching time influences the measurement flow in baseline measurement, prolonging the baseline measurement time.
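The cost of this settling behavior can be illustrated with a simple back-of-the-envelope model. The frame periods and the number of unstable frames below are purely assumed values for illustration; the description above states only that the video signal is unstable "during several frames" after a sync-frequency change.

```python
# Hypothetical model of the camera-switching overhead in baseline
# measurement. Frame rates and the settle count are assumed values,
# not specifications of the cameras described here.

FRAME_PERIOD_S = {"C3": 1 / 30, "C2": 1 / 7.5}  # standard vs high-resolution
SETTLE_FRAMES = 3  # assumed frames discarded after a sync-frequency change

def switching_overhead_s(target_camera):
    """Time lost waiting for a stable video signal after switching."""
    return SETTLE_FRAMES * FRAME_PERIOD_S[target_camera]

def baseline_switch_cost_s(n_repeats):
    """Total settle time when STEP 1/STEP 2 alternate C2 and C3 n times."""
    per_cycle = switching_overhead_s("C2") + switching_overhead_s("C3")
    return n_repeats * per_cycle
```

Under these assumed numbers, four measurement cycles already lose about two seconds to settling alone, which is why the methods below were considered.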
As methods of solving this problem, the following are conceivable:
(1) a camera controller having a function of generating two types of sync signals is mounted;
(2) camera controllers equal in number to camera types are mounted; and
(3) instead of supplying a sync signal from an external camera controller and driving each camera, a sync signal is generated in each camera, and the sync signal supplied from the camera is separated by the image processor.
Method (1) is relatively simple, but requires a special function. To cope with a larger number of different cameras, the sync signal generator becomes complicated, increasing the cost and scale of the camera controller.
Method (2) requires camera controllers equal in number to the camera types, and increases the cost and mounting space. Since the camera (C2) 202 and the camera (C3) 203 are not used simultaneously, at any given time only one of the plurality of camera controllers is actually in use, and the rest are wastefully idle. In other words, each camera controller is poorly utilized.
In method (3), the sync separation circuit of the image processor must wait until phase detection, or the like, stabilizes every time the camera speed changes. The same problem as that generated on the camera side therefore also occurs on the image processor side.