Precision machine vision inspection systems (or “vision systems” for short) can be utilized to obtain precise dimensional measurements of inspected objects and to inspect various other object characteristics. Such systems may include a computer, a camera and optical system, and a precision stage that is movable in multiple directions to allow workpiece inspection. One exemplary prior art system, which can be characterized as a general-purpose “off-line” precision vision system, is the commercially available QUICK VISION® series of PC-based vision systems and QVPAK® software available from Mitutoyo America Corporation (MAC), located in Aurora, Ill. The features and operation of the QUICK VISION® series of vision systems and the QVPAK® software are generally described, for example, in the QVPAK 3D CNC Vision Measuring Machine User's Guide, published January 2003, and the QVPAK 3D CNC Vision Measuring Machine Operation Guide, published September 1996, each of which is hereby incorporated by reference in its entirety. This type of system is able to use a microscope-type optical system and move the stage so as to provide inspection images of either small or relatively large workpieces at various magnifications.
Machine vision inspection systems generally utilize automated video inspection. U.S. Pat. No. 6,542,180 (the '180 patent) teaches various aspects of such automated video inspection and is incorporated herein by reference in its entirety. As taught in the '180 patent, automated video inspection metrology instruments generally have a programming capability that allows an automatic inspection event sequence to be defined by the user for each particular workpiece configuration. This can be implemented by text-based programming, for example, or through a recording mode which progressively “learns” the inspection event sequence by storing a sequence of machine control instructions corresponding to a sequence of inspection operations performed by a user, or through a combination of both methods. Such a recording mode is often referred to as “learn mode” or “training mode.” Once the inspection event sequence is defined in “learn mode,” such a sequence can then be used to automatically acquire (and additionally analyze or inspect) images of a workpiece during “run mode.”
The machine control instructions, including the specific inspection event sequence (i.e., how to acquire each image and how to analyze/inspect each acquired image), are generally stored as a “part program” or “workpiece program” that is specific to the particular workpiece configuration. For example, a part program defines how to acquire each image, such as how to position the camera relative to the workpiece, at what lighting level, at what magnification level, etc. Further, the part program defines how to analyze/inspect an acquired image, for example, by using one or more video tools.
Video tools (or “tools” for short) may be used in inspection and/or machine control operations. Video tools are an important and well-known operating and programming aid that provides image processing and inspection operations for non-expert users of precision machine vision inspection systems. Video tools are discussed, for example, in the previously incorporated '180 patent, as well as in U.S. Pat. No. 7,627,162, which is hereby incorporated herein by reference in its entirety. During learn mode, their set-up parameters and operation can be determined for specific portions or regions of interest on a representative workpiece, often called “training” the video tool, and recorded for inspecting similar workpieces automatically and reliably. Set-up parameters may typically be configured using various graphical user interface widgets and/or menus of the vision inspection system software. Such tools may include, for example, edge/boundary detection tools, autofocus tools, shape or pattern matching tools, dimension measuring tools, and the like. For example, such tools are routinely used in a variety of commercially available machine vision inspection systems, such as the QUICK VISION® series of vision systems and the associated QVPAK® software, discussed above.
One known type of video tool is a “multipoint tool” or “multipoint autofocus tool.” Such a tool provides Z-height measurements or coordinates (along the optical axis and focusing axis of the camera system) derived from a “best focus” position for a plurality of subregions at defined X-Y coordinates within a region of interest of the tool, such as determined by an autofocus method. A set of such X, Y, Z coordinates may be referred to as point cloud data, or a point cloud, for short. In general, according to prior art autofocus methods and/or tools, the camera moves through a range of positions along the Z-axis (the focusing axis) and captures an image at each position, the set of captured images being referred to as an image stack. For each captured image, a focus metric is calculated for each subregion based on the image and related to the corresponding position of the camera along the Z-axis at the time that the image was captured. This results in focus curve data for each subregion, which may be referred to simply as a “focus curve” or “autofocus curve.” The peak of the focus curve, which corresponds to the best focus position along the Z-axis, may be found by fitting a curve to the focus curve data and estimating the peak of the fitted curve. Variations of such autofocus methods are well known in the art. For example, one known method of autofocusing similar to that outlined above is discussed in “Robust Autofocusing in Microscopy,” by Jan-Mark Geusebroek and Arnold Smeulders in ISIS Technical Report Series, Vol. 17, November 2000. Another known autofocus method and apparatus is described in U.S. Pat. No. 5,790,710, which is hereby incorporated by reference in its entirety.
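By way of a hypothetical illustration only (not drawn from any of the referenced patents or products), the focus-curve peak estimation outlined above may be sketched as follows. The variance-based contrast metric and the parabolic fit near the discrete maximum are common textbook choices and are assumptions of this sketch, not the specific methods of the incorporated references:

```python
import numpy as np

def focus_metric(subregion):
    """Example contrast metric: variance of grayscale intensities.
    (Real systems may instead use gradient- or Laplacian-based metrics.)"""
    return float(np.var(subregion))

def best_focus_z(z_positions, metrics, window=2):
    """Estimate the best-focus Z for one subregion by fitting a parabola
    to the focus curve near its discrete maximum and returning the
    fitted vertex."""
    i = int(np.argmax(metrics))
    lo, hi = max(0, i - window), min(len(metrics), i + window + 1)
    z = np.asarray(z_positions[lo:hi], dtype=float)
    m = np.asarray(metrics[lo:hi], dtype=float)
    a, b, c = np.polyfit(z, m, 2)   # m ≈ a*z**2 + b*z + c
    if a >= 0:                      # degenerate fit: no interior peak
        return float(z_positions[i])
    return -b / (2.0 * a)           # vertex of the fitted parabola

# Synthetic focus curve for one subregion, peaking at Z = 1.3
zs = np.linspace(0.0, 3.0, 31)
curve = np.exp(-((zs - 1.3) ** 2) / 0.1)
z_hat = best_focus_z(zs, curve)
```

In practice, `focus_metric` would be evaluated on the pixels of each subregion in every image of the stack, producing one focus curve per subregion; the sketch above feeds a synthetic curve directly to the peak estimator.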
Accuracies in the micron or sub-micron range are often desired in precision machine vision inspection systems. This is particularly challenging with regard to Z-height measurements. A particular problem arises when determining a set of Z-height measurements across the surface of a workpiece, as in a multipoint tool. The Z-height accuracy and reliability may be poor for at least some of the data points in a region of interest, for a number of reasons. As a first example, when the surface in the region of interest is strongly curved (e.g., the surface of an IC ball grid array solder ball), some parts of the surface are at an extreme angle of incidence, such that they return little light through the optical system and are underexposed in the autofocus images, whereas other parts of the surface may have a small angle of incidence and be highly reflective such that they return excessive light through the optical system and are overexposed in the autofocus images. No single image exposure is suitable for all parts of such a region of interest. Underexposed and overexposed subregions exhibit low contrast and/or high image noise. Commonly assigned U.S. Pre-Grant Publication No. 2011/0133054 (the '054 Publication), which is hereby incorporated herein by reference in its entirety, discloses an embodiment of a multipoint tool, and a method for characterizing Z-height measurements (e.g., in point cloud data) which may have poor reliability due to low contrast and/or high image noise.
It is known to overcome the aforementioned problem by providing a plurality of autofocus image stacks, wherein each image stack is acquired using a different exposure level. The best focus position for a particular subregion may then be determined in the particular image stack where that subregion is most properly exposed. One such system is available from Alicona Imaging GmbH, of Grambach/Graz, Austria. However, the Alicona system is a specialized surface mapping system aimed at this particular problem, and it uses special hardware and lighting that may not be available in general purpose precision machine vision inspection systems. Such special purpose systems do not provide a sufficiently versatile solution for determining when to use a plurality of autofocus image stacks having different exposure levels, or for determining specific exposure levels, in the context of programming for a general purpose machine vision inspection system. Conversely, image fusion methods, which are known for constructing composite photographs having extended depth of field and extended dynamic range (that is, extended exposure range), are aimed at imaging rather than precise Z-height measurement. It is not clear how such methods may be implemented in the context of a video tool that may be reliably operated by a non-expert user to program a general purpose machine vision inspection system for Z-height measurement, nor how they would provide acceptable throughput and reliability for the purpose of industrial measurement applications.
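The per-subregion selection described above (choosing, among several image stacks taken at different exposure levels, the stack in which a given subregion is most properly exposed) may be sketched as follows. This is a hypothetical illustration with an assumed 8-bit sensor and a simplistic "fraction of non-clipped pixels" quality criterion; the referenced systems do not disclose this particular criterion:

```python
import numpy as np

def exposure_quality(subregion_stack, low=10, high=245):
    """Score how well exposed a subregion is across one image stack:
    the fraction of pixels that are neither underexposed (<= low) nor
    saturated (>= high), assuming 8-bit intensities."""
    s = np.asarray(subregion_stack, dtype=float)
    ok = (s > low) & (s < high)
    return float(ok.mean())

def pick_stack(subregion_stacks):
    """Given the same subregion cut from several image stacks acquired
    at different exposure levels, return the index of the stack in
    which that subregion is best exposed."""
    return int(np.argmax([exposure_quality(s) for s in subregion_stacks]))

# Hypothetical data: one subregion (5 frames of 8x8 pixels) seen at
# three exposure levels
rng = np.random.default_rng(0)
under = rng.integers(0, 8, size=(5, 8, 8))     # mostly underexposed
good = rng.integers(60, 200, size=(5, 8, 8))   # well exposed
over = rng.integers(250, 256, size=(5, 8, 8))  # mostly saturated
best = pick_stack([under, good, over])
```

The Z-height for each subregion would then be estimated from the focus curve of its selected stack only, so that clipped pixels do not corrupt the focus metric.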
For application to general purpose precision machine vision inspection systems, it is a particular problem that the various multipoint measurement operations and image processing methods incorporated into the system must often be adapted and operated for optimal throughput and reliability based on the characteristics of a particular workpiece by non-expert users, that is, users who are not skilled in the field of imaging and/or image processing. Thus, according to the considerations outlined above, there is a need for a multipoint Z-height video tool for a machine vision system which may be comprehended and operated by non-expert users to provide an appropriate number of autofocus image stacks having different exposure levels when necessary, and to determine the exposure levels needed, in the context of programming for a general purpose machine vision inspection system.