Digital image analysis is used for many practical purposes, including industrial automation, consumer electronics, medical diagnosis, satellite imaging, photographic processing, traffic monitoring, and security. In industrial automation, for example, machine vision systems use digital image analysis for automated product inspection, robot guidance, and part identification applications. In consumer electronics, for example, the common optical mouse uses digital image analysis to allow a human to control a cursor on a personal computer screen.
To service these applications, digital images are captured by a sensor, such as an optoelectronic array of photosensitive elements called pixels, and analyzed by a digital information processing device, such as a digital signal processor (DSP) or a general purpose computer executing image analysis software, to extract useful information. One common form of digital image analysis is feature detection. Physical features on objects in the field of view of the sensor (object features) give rise to patterns in the digital images (image features) that can be analyzed to provide information about those object features and the objects that contain them. Example object features might include edges, corners, holes, ridges, and valleys, which give rise to changes in surface depth, orientation, and reflectance. These changes in turn interact with illuminating radiation to produce the image features.
Image features can be detected using many well-known methods, for example edge detection, matched filters, connectivity, Hough transforms, and geometric pattern matching.
Typically, feature detection in digital image analysis is a static process, meaning generally that features are detected within a single digital image captured at a particular point in time. Equivalently, to reduce noise, static feature detection can be applied to an image that is the average of a plurality of images captured from a stationary scene.
In a typical static feature detection system, a one- or two-dimensional digital image of a scene is captured by any suitable means. The image is then analyzed by software implementing a static feature detection technique to identify image features, which comprise a set of attributes that represent measurements of physical properties of corresponding object features. In a two-dimensional edge detection system, for example, edge attributes may comprise a position, an orientation and a weight. The position estimates the location of the edge within the image, and may be determined to sub-pixel precision by well-known means. The orientation estimates the angle of the edge at the estimated position. The weight is an estimate of edge strength and can be used to provide a measure of confidence that the edge truly corresponds to a physical feature in the field of view and not to some artifact of instrumentation. Typically, an edge is considered to exist only if its weight exceeds some value herein called a detection threshold.
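The three edge attributes described above (position, orientation, and weight) can be illustrated with a minimal gradient-based sketch. This is not the method of any particular patent cited herein; it is a generic Sobel-style detector, and the function name, image values, and threshold are illustrative assumptions.

```python
import math

def sobel_edges(image, detection_threshold):
    """Return (row, col, orientation_deg, weight) for interior pixels
    whose gradient magnitude (the weight) exceeds the detection threshold."""
    rows, cols = len(image), len(image[0])
    edges = []
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            # 3x3 Sobel kernels estimate the horizontal and vertical gradients.
            gx = (image[r-1][c+1] + 2*image[r][c+1] + image[r+1][c+1]
                  - image[r-1][c-1] - 2*image[r][c-1] - image[r+1][c-1])
            gy = (image[r+1][c-1] + 2*image[r+1][c] + image[r+1][c+1]
                  - image[r-1][c-1] - 2*image[r-1][c] - image[r-1][c+1])
            weight = math.hypot(gx, gy)  # edge strength
            if weight > detection_threshold:
                orientation = math.degrees(math.atan2(gy, gx))
                edges.append((r, c, orientation, weight))
    return edges

# A vertical light-to-dark step: edges are reported along the transition,
# with orientation 0 degrees (gradient pointing along +x).
img = [[0, 0, 100, 100]] * 4
print(sobel_edges(img, 50.0))
```

An edge whose weight falls below the detection threshold is simply not reported, mirroring the confidence test described above.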
In a one-dimensional edge detection system, for example, position and weight can be similarly estimated and generally have the same meaning, but orientation is replaced with polarity, which is a two-state (binary) value that indicates whether the edge is a light-to-dark or dark-to-light transition.
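A one-dimensional version can be sketched by differencing an intensity profile and keeping local gradient-magnitude peaks; the polarity attribute falls out of the sign of the difference. Again, the function name, encoding of polarity (+1 for dark-to-light, -1 for light-to-dark), and sample values are illustrative assumptions, not from the source.

```python
def edges_1d(profile, detection_threshold):
    """Return (position, polarity, weight) tuples from a 1D intensity
    profile. Polarity is +1 for dark-to-light, -1 for light-to-dark."""
    diffs = [profile[i + 1] - profile[i] for i in range(len(profile) - 1)]
    edges = []
    for i, d in enumerate(diffs):
        weight = abs(d)
        if weight <= detection_threshold:
            continue
        # Peak detection: keep only local maxima of gradient magnitude.
        left = abs(diffs[i - 1]) if i > 0 else 0
        right = abs(diffs[i + 1]) if i + 1 < len(diffs) else 0
        if weight >= left and weight >= right:
            polarity = 1 if d > 0 else -1
            edges.append((i + 0.5, polarity, weight))
    return edges

# One dark-to-light and one light-to-dark transition.
profile = [10, 10, 10, 200, 200, 40, 40]
print(edges_1d(profile, 50))  # [(2.5, 1, 190), (4.5, -1, 160)]
```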
There are a large number of well-known static edge detection systems, including those of Sobel and Canny. Another exemplary 2D static edge detection technique is described in U.S. Pat. No. 6,690,842, entitled APPARATUS AND METHOD FOR DETECTION AND SUB-PIXEL LOCATION OF EDGES IN A DIGITAL IMAGE, by William Silver, the contents of which are hereby incorporated by reference. Generally the literature describes two-dimensional methods, with one-dimensional being a special and simpler case. Static edge detection techniques may utilize gradient estimation, peak detection, zero-crossing detection, sub-pixel interpolation, and other techniques that are well known in the art.
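One common realization of the sub-pixel interpolation mentioned above is to fit a parabola through the gradient-magnitude sample at a detected peak and its two neighbors, and take the parabola's vertex as the refined position. The following is a minimal sketch of that idea only; it is not the specific method of the cited patent.

```python
def subpixel_peak(a, b, c):
    """Given three gradient-magnitude samples (left, peak, right) at
    offsets -1, 0, +1, fit a parabola and return the vertex offset,
    which lies in (-0.5, +0.5) when b is a true local maximum."""
    denom = a - 2 * b + c
    if denom == 0:
        return 0.0  # flat neighborhood: no refinement possible
    return (a - c) / (2 * denom)

# Symmetric samples give offset 0; skew toward the right neighbor
# pulls the estimated peak position toward it.
print(subpixel_peak(10, 40, 10))  # 0.0
print(subpixel_peak(10, 40, 30))  # 0.25
```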
Another example of static feature detection is the Hough transform, described in U.S. Pat. No. 3,069,654 entitled METHOD AND MEANS FOR RECOGNIZING COMPLEX PATTERNS, and subsequently generalized by others. For a Hough transform designed to detect lines in a 2D image, for example, the feature attributes might include position and orientation.
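The line-detecting Hough transform described above can be sketched as voting each image point into a (theta, rho) accumulator and reading the line attributes off the winning cell. The function name, bin sizes, and the simple dictionary accumulator are illustrative assumptions.

```python
import math

def hough_lines(points, n_theta=180, rho_res=1.0):
    """Vote each (x, y) point into a (theta, rho) accumulator and return
    the winning line's angle (degrees), signed distance from the origin,
    and vote count."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            # Normal form of a line: rho = x cos(theta) + y sin(theta).
            rho = x * math.cos(theta) + y * math.sin(theta)
            key = (t, round(rho / rho_res))
            acc[key] = acc.get(key, 0) + 1
    (t_best, rho_bin), votes = max(acc.items(), key=lambda kv: kv[1])
    return 180.0 * t_best / n_theta, rho_bin * rho_res, votes

# Ten points on the vertical line x = 5: all ten votes land in a single
# cell near theta = 0 degrees, rho = 5.
pts = [(5, y) for y in range(10)]
print(hough_lines(pts))
```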
Yet another example of static feature detection is connectivity analysis, where feature attributes might include center of mass, area, and orientation of the principal axes.
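The blob attributes named above (center of mass, area, and principal-axis orientation) are conventionally computed from image moments. The following simplified sketch treats all foreground pixels as a single connected component, omitting the labeling step of a full connectivity analysis; names and test data are illustrative.

```python
import math

def blob_attributes(binary):
    """Compute area, center of mass, and principal-axis orientation
    (degrees) of the foreground pixels via central image moments."""
    pts = [(x, y) for y, row in enumerate(binary)
                  for x, v in enumerate(row) if v]
    area = len(pts)
    cx = sum(x for x, _ in pts) / area   # center of mass
    cy = sum(y for _, y in pts) / area
    mu20 = sum((x - cx) ** 2 for x, _ in pts)   # second central moments
    mu02 = sum((y - cy) ** 2 for _, y in pts)
    mu11 = sum((x - cx) * (y - cy) for x, y in pts)
    # Orientation of the principal axis from the moment tensor.
    angle = 0.5 * math.degrees(math.atan2(2 * mu11, mu20 - mu02))
    return area, (cx, cy), angle

# A horizontal bar: area 4, centered at (2.5, 1), principal axis at 0 deg.
img = [[0, 0, 0, 0, 0],
       [0, 1, 1, 1, 1],
       [0, 0, 0, 0, 0]]
print(blob_attributes(img))  # (4, (2.5, 1.0), 0.0)
```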
All of the information estimated by a static feature detection technique is limited in accuracy and reliability by the resolution and geometry of the pixel grid. This is because the exact alignment between the pixel grid and the physical features that give rise to image features is essentially an accident of the process by which objects or material are positioned in the field of view at the time that an image is captured. Edge weight, for example, varies significantly depending on this accidental alignment, which can result in failing to detect a true edge or falsely detecting an instrumentation artifact. This is particularly likely for edges at the limits of the resolution of the pixel grid; detection of such edges, whether real or artifactual, is at the whim of their accidental alignment with the pixel grid.
Position estimates are subject to the same whims of accidental alignment. A competent static edge detector might estimate the position of a strong, well-resolved edge to about ¼ pixel, but it is difficult to do much better. For weaker or inadequately-resolved edges, the accuracy can be substantially worse.
Static feature detection is used in the common optical mouse to track the motion of the mouse across a work surface. Methods in common use are described in, for example, U.S. Pat. No. 5,578,813, entitled FREEHAND IMAGE SCANNING DEVICE WHICH COMPENSATES FOR NON-LINEAR MOVEMENT, U.S. Pat. No. 5,644,139, entitled NAVIGATION TECHNIQUE FOR DETECTING MOVEMENT OF NAVIGATION SENSORS RELATIVE TO AN OBJECT, U.S. Pat. No. 5,786,804, entitled METHOD AND SYSTEM FOR TRACKING ATTITUDE, and U.S. Pat. No. 6,433,780, entitled SEEING EYE MOUSE FOR A COMPUTER SYSTEM. A reference pattern is stored corresponding to physical features on the work surface, where the reference pattern is a portion of a digital image of the surface. The reference pattern is correlated with subsequent digital images of the surface to estimate motion, typically using sum of absolute differences for the correlation. Once motion exceeding a certain magnitude is detected, a new reference pattern is stored. This is necessary because the old reference pattern will soon move out of the field of view of the sensor.
Correlation of this sort is a form of static feature detection. The position of the mouse when a new reference pattern is stored is an accidental alignment of the physical features of the surface with the pixel grid of the sensor, and each time a new reference pattern is stored there will be some error. These errors accumulate in proportion to the square root of the number of times a new reference pattern is stored, and quickly become rather large. This is generally not a serious problem for an optical mouse, because a human user serves as a feedback loop controlling the motion to achieve a desired effect.
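The reference-pattern correlation described above can be sketched as an exhaustive search over small offsets, scoring each candidate with the sum of absolute differences (SAD) and keeping the minimum. This is a generic illustration, not the method of any cited patent; the synthetic surface pattern, non-negative search range, and function names are illustrative assumptions.

```python
def sad(ref, img, ox, oy):
    """Sum of absolute differences between the reference patch and the
    image region at offset (ox, oy)."""
    return sum(abs(ref[y][x] - img[y + oy][x + ox])
               for y in range(len(ref)) for x in range(len(ref[0])))

def track(ref, img, search=2):
    """Try every offset from (0, 0) to (search, search) and return the
    (dx, dy) offset minimizing the SAD correlation score. Offsets are
    kept non-negative here so all indices stay in range."""
    best = None
    for oy in range(search + 1):
        for ox in range(search + 1):
            score = sad(ref, img, ox, oy)
            if best is None or score < best[0]:
                best = (score, ox, oy)
    return best[1], best[2]

# A synthetic textured surface; the stored reference pattern is the 3x3
# region at (1, 1), so tracking should recover the offset (1, 1) exactly.
surface = [[(x * 7 + y * 3) % 13 for x in range(6)] for y in range(6)]
ref = [row[1:4] for row in surface[1:4]]
print(track(ref, surface))  # (1, 1)
```

Each time such a tracker re-anchors on a new reference pattern, the small sub-pixel error of that accidental alignment is locked in, which is the random-walk accumulation of error described above.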
There are a large number of applications for which accurate tracking of the motion of objects or material is of considerable practical value, and where traditional methods of digital image analysis, such as that used in an optical mouse, are inadequate. These applications include numerous examples in industrial manufacturing, including for example the tracking of discrete objects for control of an ink jet or laser printer, and the tracking of material in a continuous web. The most commonly used solution is a mechanical encoder attached to a transport drive shaft, but these have many well-known problems including slip between the drive and the material, resulting in inaccuracies. Systems using laser Doppler technology for direct non-contact measurement of surface speed are available, but they are generally expensive and bulky.