Machine vision systems use image acquisition devices that include camera sensors to deliver information related to a viewed object/surface. The system then interprets this information according to a variety of algorithms to perform a programmed decision-making, alignment, and/or identification function.
By way of background, FIG. 1 shows an exemplary vision system 110 that includes an optical camera lens 120 and an associated one-dimensional (1D) or two-dimensional (2D) image sensor pixel array 121. These components 120, 121 are arranged to acquire an image in the form of a matrix of pixel data. The field of view 122 defines the boundaries 126 of the acquired image, which in this example contains an object or surface 124 with features that the system 110 desires to analyze. The field of view 122 can extend beyond the object 124, as shown by the associated boundaries. The acquired image data 140, in the form of color pixels or grayscale pixel intensity values, is processed by an image processor 130 and transferred to another device (not shown). One example of a connected device is a digital signal processor (DSP) that is adapted to decode symbology. The image processor can include hardware functions and/or program instructions that perform a variety of processing tasks. For example, the image processor can perform feature detection using blob analysis, edge detection and other conventional vision system operations. The features can be further processed by the image processor 130 to provide, for example, pose and alignment data, inspection results, object recognition results or other useful image data. The results can be used internally to provide alarms and signals, or can be transferred to another device or system. For example, ID features from a barcode acquired by the system 110 can be transferred to a dedicated DSP to decode the features into an alphanumeric character string using conventional decoding functions.
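To illustrate the kind of conventional edge-detection operation referred to above, the following is a minimal sketch that computes Sobel gradient magnitudes over a grayscale pixel matrix and thresholds them into an edge map. The function name, threshold parameter and NumPy implementation are illustrative assumptions only, and do not form part of the depicted system 110.

```python
import numpy as np

def sobel_edges(img, thresh=1.0):
    """Return a boolean edge map from simple Sobel gradient magnitudes."""
    img = img.astype(float)
    # Horizontal and vertical 3x3 Sobel kernels
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    # Accumulate the correlation over the 3x3 neighborhood (interior pixels only)
    for i in range(3):
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    mag = np.hypot(gx, gy)  # gradient magnitude per interior pixel
    return mag > thresh
```

In a practical pipeline, the resulting edge map (or blob labels from a comparable blob-analysis pass) would feed the further processing by the image processor 130 described above.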
The pixel data acquired from an entire field of view 122 can be relatively large from a processing-overhead standpoint. In general, this pixel data is read out from the sensor to a data memory associated with the processor, which then performs image processing functions on the image data. Where the processor is a graphics processing unit (GPU) or a digital signal processor (DSP), it is typically required to sort through the entire set of image data from a captured image in order to derive the desired result(s). This requires the device to handle a very large quantity of data, and thus can incur significant processing overhead. Generally, the device (GPU, DSP, etc.) has a relatively small, directly accessible cache memory, requiring that image data be moved repeatedly between the cache and a larger off-die random access memory (RAM). This movement of data is expensive in terms of processing overhead. These various issues lead to a scenario in which the DSP may be too overloaded with image data to meet required throughput rates and other system parameters.
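The cache-versus-RAM traffic described above can be made concrete with simple back-of-envelope arithmetic. The frame and cache sizes below are assumed figures chosen only for illustration, not parameters of any particular device.

```python
# Assumed figures, for illustration only
frame_bytes = 2048 * 1536  # one 8-bit grayscale pixel per location (~3 MB frame)
cache_bytes = 256 * 1024   # small, directly accessible on-die cache (256 KB)

# A full-frame pass must stream the image through the cache in pieces;
# each cache-sized chunk implies a round trip to off-die RAM.
refills = -(-frame_bytes // cache_bytes)  # ceiling division
print(refills)
```

Under these assumed figures, a single full-frame pass requires the cache to be refilled a dozen times, and multi-stage algorithms repeat that traffic at every stage.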
Notably, the imaged object or surface 124 typically represents a smaller amount of image data than that contained within the image of the overall field of view 122. Within the smaller image data of the object/surface 124, the features of interest 150 (e.g. a barcode, part, bolt hole, etc.) can be an even smaller subset of the overall image data. Thus, the actual image data that is useful for further processing by the DSP or other processing device may in fact be a much smaller subset of the overall image data.
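The data reduction available from operating only on the feature of interest 150, rather than the full field of view 122, can be sketched as follows. The frame dimensions and bounding box are hypothetical values, and NumPy slicing stands in for whatever windowing mechanism a given system provides.

```python
import numpy as np

# Full field of view as a grayscale pixel matrix (synthetic, hypothetical size)
frame = np.zeros((1024, 1280), dtype=np.uint8)

# Hypothetical bounding box of a feature of interest (e.g. a barcode region)
top, left, height, width = 400, 500, 80, 300
frame[top:top + height, left:left + width] = 255

# Extract only the region of interest; downstream processing sees this subset
roi = frame[top:top + height, left:left + width]
reduction = frame.size / roi.size  # how many times smaller the ROI data is
```

Here the region of interest carries roughly one fiftieth of the full-frame pixel data, which is the kind of subset a downstream DSP would prefer to receive.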
It is, therefore, desirable to provide a system and method that identifies useful image data within an acquired set of overall image data, so that this data can be processed more quickly and efficiently by the processors employed in the vision system.