This patent document contains information subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent, as it appears in the U.S. Patent and Trademark Office files or records, but otherwise reserves all copyright rights whatsoever.
1. Field of the Invention
This invention relates to inspection systems and methods for machine vision applications, and more particularly relates to techniques and systems that perform object feature detection and analysis.
2. Background
The simplest kinds of images that can be used for machine vision are simple two-dimensional shapes or "blobs". Blob analysis is the detection and analysis of two-dimensional shapes within images. Blob analysis can provide a machine vision application with information about the number, location, shape, and orientation of blobs within an image, and can also provide information about how blobs are topologically related to each other.
Since blob analysis is fundamentally a process of analyzing the shape of a closed object, before blob analysis can be performed on an image, the image is segmented into those pixels that make up the blob being analyzed, and those pixels that are part of the background. Images used for blob analysis generally start out as grey-scale images of scenes. While it might be easy for a human observer to identify blobs or objects within the scene, before blob analysis can analyze an image, each pixel in the image is assigned as an object pixel or a background pixel. For example, object pixels may be assigned a value of 1, while background pixels are assigned a value of 0.
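The assignment of object and background pixels described above can be sketched as a simple hard-threshold segmentation. This is an illustrative example only; the function name, the example image, and the threshold value of 128 are assumptions, not part of the patent:

```python
def segment(image, threshold):
    """Hard-threshold segmentation: pixels at or above the threshold
    become object pixels (1); all others become background (0)."""
    return [[1 if p >= threshold else 0 for p in row] for row in image]

# A small grey-scale image with a bright feature on a dark background
grey = [
    [10,  12, 200, 210],
    [11, 198, 205, 209],
    [ 9,  13,  14, 202],
]
binary = segment(grey, threshold=128)
# binary marks the bright feature with 1s and the background with 0s
```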
A number of techniques can be used to segment images into object pixels and background pixels, including hard-threshold and soft-threshold segmentation, pixel mapping, and analysis using threshold images. However, when blob analysis of the resulting segmented image is performed, the results of the analysis are often degraded by spatial quantization error. Spatial quantization error results from the fact that the exact edge of an object in a scene rarely falls precisely at the boundary between two pixels in an image of that scene. The pixels in which the edge of the blob falls have some intermediate pixel value. Depending on how much of the object lies on the pixel, the pixel is counted as either an object pixel or a background pixel. As a result, a very small change in the position of a blob can result in a large change in the reported position of the blob edge.
Spatial quantization error can affect the size, perimeter, and location that are reported for a blob. The severity of the error depends on the ratio of blob perimeter to blob area: the greater the ratio, the greater the effect. Edges that are aligned with the pixel grid, such as those of rectangles, tend to produce systematic reinforcing errors, while other edges, such as those of round objects, tend to produce random canceling errors.
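The spatial quantization effect described above can be demonstrated with a one-dimensional sketch: an object edge that shifts by a small fraction of a pixel can change the reported length by a whole pixel. The model below (unit-width pixels whose grey value equals the fraction covered, hard-thresholded at 0.5) is an illustrative assumption, not the patent's procedure:

```python
def reported_length(edge_pos, n_pixels=10, threshold=0.5):
    """Object occupies [0, edge_pos) along a row of unit-width pixels.
    Each pixel's value is the fraction of the pixel the object covers;
    hard thresholding then counts it as object (1) or background (0)."""
    count = 0
    for i in range(n_pixels):
        coverage = min(max(edge_pos - i, 0.0), 1.0)
        if coverage >= threshold:
            count += 1
    return count

# A 0.02-pixel shift of the true edge changes the reported length by a
# full pixel, even though the object barely moved:
reported_length(3.49)  # -> 3
reported_length(3.51)  # -> 4
```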
Once an image has been segmented into object pixels and background pixels, connectivity analysis must be performed to assemble object pixels into connected groups of object pixels or blobs. There are three types of connectivity analysis: whole image connectivity analysis, connected blob analysis and labeled connectivity analysis.
Connected blob analysis uses connectivity criteria to assemble the object pixels within the image into discrete, connected blobs. Conventionally, connectivity analysis is performed by joining all contiguous object pixels together to form blobs. Object pixels that are not contiguous are not considered to be part of the same blob.
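Joining contiguous object pixels into discrete blobs can be sketched as a connected-components labeling pass. The breadth-first flood fill and the choice of 4-connectivity below are illustrative assumptions; the patent does not specify a particular algorithm:

```python
from collections import deque

def label_blobs(binary):
    """Group contiguous object pixels (value 1) into blobs using
    4-connectivity. Returns a label image (0 = background,
    1..n = blob id) and the number of blobs found."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] == 1 and labels[sy][sx] == 0:
                next_label += 1          # start a new blob
                labels[sy][sx] = next_label
                queue = deque([(sy, sx)])
                while queue:             # flood-fill the blob
                    y, x = queue.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] == 1
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
    return labels, next_label

# Two groups of object pixels that are not contiguous -> two blobs
binary = [[1, 1, 0, 0],
          [0, 1, 0, 1],
          [0, 0, 0, 1]]
labels, n_blobs = label_blobs(binary)
```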
Once an image has been segmented, and the blob or blobs have been located and identified, an application can begin to consider information about the blob or blobs. A blob is an arbitrary two-dimensional shape. The shape of a blob can be described using a number of different measures. The measures that blob analysis provides may include geometric properties, non-geometric properties and topological properties.
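A few of the geometric measures mentioned above (area, centroid, bounding box) can be computed directly from a labeled image. The function below is a minimal sketch under the assumption that blobs are stored as integer labels in a 2-D list; it is not the patent's measurement tool:

```python
def blob_measures(labels, blob_id):
    """Basic geometric measures of one blob in a label image:
    area (pixel count), centroid, and axis-aligned bounding box."""
    points = [(y, x) for y, row in enumerate(labels)
              for x, v in enumerate(row) if v == blob_id]
    area = len(points)
    cy = sum(y for y, _ in points) / area   # centroid row
    cx = sum(x for _, x in points) / area   # centroid column
    ys = [y for y, _ in points]
    xs = [x for _, x in points]
    bbox = (min(ys), min(xs), max(ys), max(xs))
    return {"area": area, "centroid": (cy, cx), "bbox": bbox}

# An L-shaped blob labeled 1
labels = [[1, 1, 0],
          [0, 1, 0],
          [0, 0, 0]]
m = blob_measures(labels, 1)
```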
Connected blob analysis relies on grey-level thresholds (and possibly morphological operations) to segment an image into blob and non-blob areas. One use of connected blob analysis involves an operator setting a "hard" threshold. Such a threshold can be effective when the desired feature's grey-level differs significantly from the background, and/or the background and blob have uniform intensity. Connected blob analysis runs into problems when either the background or the blob is textured. Non-uniform intensity can also cause problems. Furthermore, conventional connected blob analysis does not provide a Within-Group-Variance (WGV) strategy to automatically compute the threshold.
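A Within-Group-Variance strategy of the kind referred to above can be sketched as an exhaustive search for the threshold that minimizes the pooled variance of the two pixel populations it creates (the criterion behind Otsu's method). This is an illustrative sketch, not the patent's exact procedure, and the example data is invented:

```python
def wgv_threshold(pixels, levels=256):
    """Return the threshold t that minimizes the within-group variance
    of the groups {p < t} and {p >= t}."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    best_t, best_wgv = 0, float("inf")
    for t in range(1, levels):
        lo = [(v, hist[v]) for v in range(t) if hist[v]]
        hi = [(v, hist[v]) for v in range(t, levels) if hist[v]]
        if not lo or not hi:
            continue  # one group is empty; no valid split
        n_lo = sum(c for _, c in lo)
        n_hi = total - n_lo
        m_lo = sum(v * c for v, c in lo) / n_lo
        m_hi = sum(v * c for v, c in hi) / n_hi
        var_lo = sum(c * (v - m_lo) ** 2 for v, c in lo) / n_lo
        var_hi = sum(c * (v - m_hi) ** 2 for v, c in hi) / n_hi
        wgv = (n_lo * var_lo + n_hi * var_hi) / total
        if wgv < best_wgv:
            best_wgv, best_t = wgv, t
    return best_t

# Bimodal data: dark background around 20, bright object around 200;
# the chosen threshold should separate the two clusters.
pixels = [18, 20, 22, 19, 21, 198, 200, 202, 199, 201]
t = wgv_threshold(pixels)
```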
Various forms of blob analysis are routinely performed by conventional machine vision inspection systems. Conventional machine vision inspection systems come in two varieties: those in which the shape of defects with respect to a known pattern (whether edge data or grey-level data) is known, and "blank scene inspection", in which the shape of objects and/or object features is unknown until runtime. Blank scene inspection can involve two different techniques: detecting unexpected edges and detecting pixels with inappropriate grey-levels.
The former technique, detecting unexpected edges, takes advantage of known background information to ignore "expected edges". The unexpected edges are collected into separate features, and the tool provides measurements about each feature. Edge detection involves picking magnitude thresholds and/or edge chain length thresholds. One problem of edge-based "blank scene" machine vision systems occurs when the edges do not form a closed connected boundary. In this case, it is difficult to guarantee that the correct edges have been joined into boundaries.
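The magnitude-threshold step mentioned above can be sketched as marking pixels whose gradient magnitude (here computed with central differences) meets a chosen threshold. The gradient operator and the threshold value are illustrative assumptions; real edge detectors typically also perform smoothing, thinning, and chaining:

```python
def edge_pixels(image, mag_threshold):
    """Mark interior pixels whose gradient magnitude, estimated by
    central differences, is at least mag_threshold."""
    h, w = len(image), len(image[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (image[y][x + 1] - image[y][x - 1]) / 2.0
            gy = (image[y + 1][x] - image[y - 1][x]) / 2.0
            if (gx * gx + gy * gy) ** 0.5 >= mag_threshold:
                edges[y][x] = 1
    return edges

# A vertical step in intensity produces a column of edge pixels
step = [[0, 0, 100, 100] for _ in range(4)]
edges = edge_pixels(step, mag_threshold=25.0)
```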
The latter technique, detecting pixels with inappropriate grey-levels, is a form of blob analysis.
Both grey level-based techniques and edge-based techniques have advantages and disadvantages. Conventional blob analysis inspection tools suffer significant drawbacks when detecting objects and/or object features whose appearance is relatively unknown until runtime.
For example, some conventional blob analysis techniques are too fragile because they rely directly on grey-levels and grey-level thresholds. Although grey level-based methods always provide a region and a closed boundary, it is often difficult to select a good threshold, and the selected region may not be the region of interest. Furthermore, if the threshold is automatically computed from the image, repeated acquisitions of the same scene may yield different thresholds, and the computed region may fluctuate.
Conventional edge detection tools that use edge chains, by contrast, do not necessarily produce closed connected boundary contours. Edges are usually more stable with respect to illumination and scene changes than grey-level measures, and edge information is determined entirely from the local neighborhood rather than from global information. However, edge boundaries are not guaranteed to be closed contours, and edges can disappear if the transition becomes too dim. As a result, both the technique of detecting unexpected edges and the technique of detecting pixels with inappropriate grey levels have deficiencies.
The present invention is provided to improve object or object feature (e.g., character) recognition techniques. More specifically, improved methods are presented to overcome these limitations by providing systems and methods for visual inspection designed to detect and report object and/or object feature shapes from an acquired image of an inspected sample-object using both edge-based and grey level-based data. The exemplary embodiment of the invention is particularly useful for detecting objects and/or object features whose appearances are relatively unknown until runtime.
In one exemplary embodiment, the blob boundary information and edge-based information are processed so that the detected object and/or feature data includes the edge chain data whenever it is sufficiently close to the blob boundary data. Otherwise, if there are no nearby edges, the exemplary embodiment may use the blob boundary data.
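The selection rule described above can be sketched as follows: for each blob-boundary point, substitute the nearest detected edge point if one lies within a distance tolerance, and otherwise keep the blob-boundary point. This is a simplified sketch of the idea; the point representation, the Euclidean distance test, and the tolerance value are assumptions, not the claimed method:

```python
def fuse_boundary(blob_boundary, edge_points, max_dist=1.5):
    """Prefer edge-based data near the blob boundary: replace each
    blob-boundary point with the nearest edge point within max_dist,
    or keep the blob-boundary point if no edge is close enough."""
    fused = []
    for bx, by in blob_boundary:
        best, best_d = None, max_dist
        for ex, ey in edge_points:
            d = ((bx - ex) ** 2 + (by - ey) ** 2) ** 0.5
            if d <= best_d:
                best, best_d = (ex, ey), d
        fused.append(best if best is not None else (bx, by))
    return fused

# One boundary point has a nearby edge (and is replaced by it);
# the other has no nearby edge and falls back to the blob boundary.
blob_boundary = [(0.0, 0.0), (5.0, 0.0)]
edge_points = [(0.2, 0.1)]
fused = fuse_boundary(blob_boundary, edge_points)
```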