1. Field of the Invention
The present invention relates to an image processing method, an image processing device, and a bonding apparatus that includes such an image processing device, and more particularly to a method and device that detect the position of an object of detection by performing pattern matching between the object of detection and a reference image.
2. Prior Art
In image processing techniques, pattern matching is widely used. Pattern matching generally uses a portion of a reference image (constituting a known image) as a template image in order to detect the position of an object of detection by detecting the position of this known image contained in the image of the object of detection.
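By way of illustration only (this sketch is not part of the patent; the function name and the use of normalized cross-correlation as the coincidence measure are assumptions), a brute-force template match of the kind described above might be written as follows:

```python
import numpy as np

def match_template(image, template):
    """Brute-force normalized cross-correlation template match.

    Slides the template over every position of the image and returns
    the (row, col) of the top-left corner where the amount of
    coincidence (NCC score) is maximal. Illustrative sketch only;
    practical systems use optimized (e.g., FFT-based) correlation.
    """
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz * wz).sum()) * t_norm
            if denom == 0:
                continue  # flat window: correlation undefined
            score = (wz * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos
```

The template position with the maximum score corresponds to the relative position at which the reference image and the image of the object of detection best coincide.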
A position detection method that uses such pattern matching will be described with reference to a wire bonding apparatus, which is a semiconductor assembly apparatus, as an example.
In a wire bonding apparatus, wires, typically metal wires, are bonded so that bonding pads consisting of, for instance, aluminum on the surfaces of semiconductor chips are connected to leads consisting of conductors that are formed so that these leads surround the semiconductor chips. However, prior to this bonding operation, the bonding points, which are the points where bonding is performed, are calculated utilizing pattern matching.
First, as shown in FIG. 14, respective alignment points 32a, which are reference points used for positioning, are registered. In a wire bonding apparatus of the type shown in FIG. 1, in which a camera that is fastened to an XY table via a bonding head and a camera arm is moved in the horizontal direction relative to a semiconductor chip by the operation of the XY table, such alignment points are registered in the following manner. While an image from the camera 7 that has imaged the semiconductor chip 14a is displayed on the display screen of a monitor 39, the visual field is moved by moving the XY table 1, to which the camera 7 is fastened via the bonding head 2 and camera arm 6, so that the center point 32a of the cross marks 32 that indicate the center of the visual field displayed on the display screen of the monitor 39 is aligned with an arbitrary point on the semiconductor chip 14a. An input operation involving the pressing of the input switch of a manual input means 33, etc., is then performed; the region that is surrounded by the rectangular reticle marks 42 and centered on the center point 32a at this time is stored in memory as a template image, and the coordinates on the XY table 1 at this time are stored in a data memory 36 as alignment points.
Generally, in order to minimize detection error, two places (Pa1x, Pa1y) and (Pa2x, Pa2y) are selected on the pad side, and two places (La1x, La1y) and (La2x, La2y) are selected on the lead side, as alignment points along a diagonal in the vicinity of the corners of the semiconductor chip 14a.
Next, the center point 32a of the cross marks 32 is aligned with appropriate positions on the individual pads P and leads L, typically with points that are located substantially in the centers of the respective pads P and leads L, and the input switch is pressed. Thus, the coordinates of the respective bonding points are stored in the data memory 36.
In run time processing (i.e., processing during actual production of the product), a new semiconductor device 14 that is the object of detection is set, and the XY table 1 is moved by the control of an operating device so that the area in the vicinity of each registered alignment point A0 becomes the visual field of the camera 7 (FIG. 15). An image of the new semiconductor device 14 is acquired by the camera 7. Then, the registered reference image is superimposed on the image of the object of detection in a relative position such that the amount of coincidence between the image of the object of detection and the reference image shows a maximum value, and the amount of positional deviation (ΔX, ΔY) between the position coordinates on the XY table 1 of the center point 32a in this attitude and the position coordinates on the XY table 1 of the alignment point A0 (which is the position of the center point 32a at the time of the previous registration of the template image), e.g., (Pa1x, Pa1y), is determined by pattern matching detection. The positional deviation is similarly determined for the remaining alignment points 32a, the calculated amounts of positional deviation (ΔX, ΔY) are added to the position coordinates of the alignment points measured at the time of the previous registration of the template image, e.g., as (Pa1x+ΔX, Pa1y+ΔY), and the values thus obtained are designated as new alignment points Am. “Am” is a symbol, and “m” referred to here is not a numerical value that has a range.
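The bookkeeping of the deviation (ΔX, ΔY) and of the new alignment point Am described above can be sketched as follows (not part of the patent; the function name and coordinate representation are hypothetical):

```python
def update_alignment_point(registered, matched):
    """Given the registered alignment point A0 and the matched
    position of the template center on the XY table, compute the
    positional deviation (dX, dY) and the new alignment point
    Am = A0 + (dX, dY).

    registered, matched: (x, y) coordinates on the XY table.
    Returns (Am, (dX, dY)).
    """
    dx = matched[0] - registered[0]
    dy = matched[1] - registered[1]
    # Am coincides with the matched position of the center point.
    return (registered[0] + dx, registered[1] + dy), (dx, dy)
```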
Next, the positions of the respective pads and leads are determined by calculation from the positions of the new alignment points Am in a form that preserves the relative positions of the respective pads and leads at the time of registration with respect to the alignment points A0 (hereafter, this is referred to as “position correction”), so that the actual bonding points are determined. Then, bonding operations are performed on these actual bonding points.
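The position correction described above amounts, in this prior-art scheme, to translating all registered bonding points by the deviation of the alignment point, so that their positions relative to the alignment point are preserved. A minimal sketch (hypothetical names; translation-only correction as described in the text):

```python
def correct_bonding_points(a0, am, registered_points):
    """Translation-only position correction: shift every registered
    bonding point by the deviation between the original alignment
    point A0 and the new alignment point Am, preserving the relative
    positions registered with respect to A0.
    """
    dx, dy = am[0] - a0[0], am[1] - a0[1]
    return [(x + dx, y + dy) for x, y in registered_points]
```

Note that this correction models only translation; as discussed below, it cannot compensate for a deviation in the rotational direction.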
In cases where the semiconductor device 14 that is the object of detection is disposed in an attitude that includes a positional deviation in the rotational direction, high-precision position correction of the pads P and leads L cannot be accomplished, even if pattern matching detection using a registered reference image is performed.
The underlying reason for this is as follows: if the image of the object of detection and the reference image are superimposed so that the amount of coincidence for the pattern serving as a reference (pads P in FIG. 15) shows a maximum value, the positions of the new alignment points Am stipulated by the relative positions with respect to the pattern serving as a reference should coincide with the positions of the original alignment points A0 similarly stipulated by the relative positions with respect to the pads in the reference image. However, as shown in FIG. 16, when the semiconductor device 14 that is the object of detection is disposed in an attitude that includes a positional deviation in the rotational direction, the original alignment points A0 and new alignment points Am do not coincide, even if the image of the object of detection and the reference image are superimposed so that the amount of coincidence shows a maximum value for the pattern serving as a reference (pads P in FIG. 16).
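This mismatch can be demonstrated numerically (an illustrative sketch, not part of the patent; names and coordinates are hypothetical). When the device rotates by an angle θ about some center, matching on the pad pattern locates the rotated pad, but translation-only correction reuses the registered offset from the pad to the alignment point; the true alignment point, however, carries the *rotated* offset, so an error of magnitude 2·sin(θ/2)·|offset| remains:

```python
import math

def rotate(point, theta, center=(0.0, 0.0)):
    """Rotate a point about a center by theta radians."""
    x, y = point[0] - center[0], point[1] - center[1]
    c, s = math.cos(theta), math.sin(theta)
    return (center[0] + c * x - s * y, center[1] + s * x + c * y)

def alignment_error(pad, align, theta, center=(0.0, 0.0)):
    """Distance between the true (rotated) alignment point and the
    point predicted by translation-only matching on the pad pattern."""
    true_align = rotate(align, theta, center)
    matched_pad = rotate(pad, theta, center)
    # Translation-only correction reuses the registered offset.
    predicted = (matched_pad[0] + align[0] - pad[0],
                 matched_pad[1] + align[1] - pad[1])
    return math.dist(true_align, predicted)
```

The error vanishes for θ = 0 and grows with both the rotation angle and the distance between the alignment point and the matched pattern, which is why fine pitches (discussed below) make the problem acute.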
The above problem can be solved by setting, as the alignment points, points that tend not to be affected by rotation of the attitude of the semiconductor device 14 that is the object of detection. However, it is difficult for the operator to find such alignment points.
The error that is caused by this positional deviation in the direction of rotation of the object of detection is not a problem if the pitch between the pads P or between the leads L is sufficiently large. However, such error is a major problem in cases where it is necessary to deal with the finer pitches that have been used in recent years, i.e., finer pitches between the pads P and between the leads L.
Meanwhile, various methods have been proposed in which pattern matching with the image of the object of detection is performed while the reference image is rotated (see Japanese Patent Application Laid-Open (Kokai) No. 9-102039, for instance). In such methods, position detection that takes positional deviation in the direction of rotation into account is possible. However, pattern matching in increments of several degrees must be performed in the direction of rotation for numerous points within the visual field. As a result, the amount of calculations required greatly increases, thus slowing down recognition. Accordingly, such methods are not practical.
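The computational burden of such rotational search can be made concrete with a simple count (an illustrative sketch, not from the cited reference; the parameterization is an assumption): exhaustive matching must evaluate the coincidence measure at every candidate position for every candidate angle, so the cost of translation-only search is multiplied by the number of angle steps.

```python
def matching_cost(width, height, tw, th, angle_range_deg, step_deg):
    """Number of coincidence evaluations for exhaustive matching:
    every candidate template position times every rotation angle
    tried between 0 and angle_range_deg in steps of step_deg."""
    positions = (width - tw + 1) * (height - th + 1)
    angles = int(angle_range_deg / step_deg) + 1
    return positions * angles
```

For example, searching a +/-5 degree range in 2 degree steps multiplies the position-only cost sixfold, which is the increase in calculation the text identifies as impractical.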