The present invention relates to imaging technology, and more particularly to CCD imaging and signal processing. Such CCD devices have been proposed for pattern recognition and the interpretation of the imaged pattern in both robotic vision systems and surveillance applications.
In such CCD imaging devices, either a linear array or a two-dimensional X-Y array of photosensitive picture elements, or pixels, is utilized to produce an electronic charge pattern related to the imaged scene. The CCD device facilitates collection and processing of the electronic output signal. These devices are not practical for applications such as robotic vision because of the large number of pixels that must be processed to reduce the vast quantity of image data to the few signals needed for controlling robotic movements or for other decision making. Such limitations apply in general to systems which require real time, or at least near real time, signal processing.
In U.S. Pat. No. 4,331,889, a CCD focal plane array is described in which signal enhancement is achieved by increasing the integration time during which each detector generates a photocurrent. U.S. Pat. No. 4,275,315 teaches an optically scanned monolithic focal plane array with an on-chip aperture corrector which sums charge using a main CCD and a secondary CCD, with a summing node between them.
U.S. Pat. No. 4,187,000 teaches an optical computer for computing a variety of functions, including convolution in real time. This prior art patent offers a general theoretical description of convolver computation and function. U.S. Pat. No. 4,200,861 teaches a pattern recognition device with stored information compared in real time with field-of-view information, and convolved to generate a correlation number indicating the percentage of match.
U.S. Pat. No. 4,298,887 teaches a multi-element staring infrared imaging system in which non-uniformity correction is achieved using a convolution integral to correct for non-uniform pixel element response. U.S. Pat. No. 4,301,471 describes a moving target indicator system using CCD signal processing. The processing of signals from CCD's is described in U.S. Pat. No. Re. 30,087; U.S. Pat. Nos. 4,035,629; and 4,079,238; these signal processing techniques, referred to as correlated double sampling and extended correlated double sampling, are incorporated herein by reference and are usable with the present invention.
In order to simplify and speed up the processing and interpretation of optical image signals, techniques have been developed in the prior art for processing the image data to extract the location of expected object features. Special preprocessing hardware has been used to convert the grey level imagery to a black and white binary image. The technique of thresholding has been used so that objects of interest become silhouettes on a bright background, and elementary edge extraction provides the silhouette of object edges on a dark background. In real world environments, such binary imaging is not adequate for reliable classification and interpretation of the scene and multi-feature grey level image data is desired.
In pattern recognition for robotics and other applications, image data (pixel values) must be processed to extract only a few values useful for directing the mechanical motion of the robot, or for describing the viewed environment. In image interpretation algorithms, primitive image operations include performing linear convolutions, and combining these convolutions in nonlinear ways to extract rotationally invariant features such as edges, lines, corners, line-ends, points, texture, curved edges, etc. Vectors of these rotationally invariant features are used to classify regions of the image, and to extract the reduced data set required for directing robot activities or for interpreting the viewed scene.
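As an illustration of combining linear convolutions nonlinearly into a rotationally invariant feature, the sketch below convolves an image with two directional edge kernels (standard Sobel operators, used here for illustration and not taken from the present invention) and combines them as a gradient magnitude, which is unchanged when the edge direction rotates:

```python
import math

def convolve2d(image, kernel):
    """Valid-mode 2D convolution of a list-of-lists image with a kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            acc = 0
            for l in range(kh):
                for m in range(kw):
                    acc += kernel[l][m] * image[i + l][j + m]
            row.append(acc)
        out.append(row)
    return out

# Directional edge kernels (Sobel operators).
sobel_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
sobel_y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def edge_magnitude(image):
    """Combine two linear convolutions nonlinearly (sum of squares,
    square root) into a rotation-invariant edge-strength feature."""
    gx = convolve2d(image, sobel_x)
    gy = convolve2d(image, sobel_y)
    return [[math.hypot(a, b) for a, b in zip(rx, ry)]
            for rx, ry in zip(gx, gy)]
```

A vertical edge and the same edge rotated 90 degrees yield the same magnitude, which is the sense in which the feature is rotationally invariant.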
The major computational bottleneck in such image interpretation is the convolution operation required to extract rotationally invariant features. A typical convolution might involve the multiplication and addition of a 7×7 array of coefficients by image data, centered at all possible pixel locations. In some microprocessors, a multiplication requires about 250 microseconds. If five separate convolutions are extracted over a 256×256 pixel image region, the processing would take over an hour. It is, therefore, necessary that the convolution be performed more rapidly, on the order of milliseconds or microseconds, on the focal plane of the imaging CCD.
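The arithmetic behind that estimate can be checked directly, using the figures given in the text above:

```python
# Back-of-envelope check of the timing claim (figures from the text:
# 250 microseconds per multiply, a 7x7 kernel, a 256x256 image region,
# and five separate convolutions).
multiply_time_s = 250e-6
kernel_ops = 7 * 7          # multiplies per output pixel per kernel
pixels = 256 * 256          # output locations
convolutions = 5

total_s = convolutions * pixels * kernel_ops * multiply_time_s
print(total_s / 3600)       # about 1.1 hours
```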
In many vision system applications, linear convolution image operators are required for extracting signatures needed in recognizing patterns. A convolution is a linear combination of neighboring pixel values centered about some point on the focal plane at a given time. The neighboring pixels can be separated either in space or in time. The linear combination is expressed as:

C_{i,j}(t) = Σ_{l,m;k} α_{lm;k} · P_{i+l,j+m}(t − kΔt)

where i, j are the coordinates of the centering point on the focal plane at some time t, the indices l, m range over the spatial offsets of the neighboring pixels, k ranges over their temporal offsets, and P_{i+l,j+m} denotes the corresponding pixel value.
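In software terms, the linear combination above can be sketched as follows. The function name, argument layout, and coefficient arrangement are illustrative assumptions; the patent realizes this operation in charge-domain hardware, not software:

```python
def focal_plane_convolution(frames, alpha, i, j, t):
    """Compute C_{i,j}(t): a linear combination of pixel values that
    neighbor (i, j) in space (offsets l, m) and in time (k frames back).
    `frames[t]` is the image at time index t; `alpha[k][l][m]` holds the
    convolution coefficients (a hypothetical layout, for illustration).
    For brevity the offsets here run forward from (i, j); a window
    centered on (i, j) would just shift the index ranges."""
    c = 0.0
    for k, kernel in enumerate(alpha):        # temporal offset k
        frame = frames[t - k]
        for l, row in enumerate(kernel):      # spatial row offset l
            for m, coeff in enumerate(row):   # spatial column offset m
                c += coeff * frame[i + l][j + m]
    return c
```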
In order to accommodate both positive and negative convolution coefficients, α_{lm;k}, the focal plane convolver of the present invention performs two separate convolutions by associating all of the terms of like sign together:

C_{i,j}(t) = C_{i,j}^{(+)}(t) − C_{i,j}^{(−)}(t)
Both C^{(+)} and C^{(−)} have positive coefficients and can be physically realized by collecting photo-generated electrons from the appropriate pixel sites with integration time proportional to the magnitude of the convolution coefficient.
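The decomposition into two all-positive convolutions can be sketched as follows. This is a minimal software illustration of the sign-splitting idea, not the on-chip charge-domain implementation, and the kernel and image values are hypothetical:

```python
def split_kernel(kernel):
    """Separate a signed kernel into two all-nonnegative kernels,
    alpha_plus and alpha_minus, with kernel = alpha_plus - alpha_minus.
    Each half can then be realized by charge integration, since an
    integration time can only be positive."""
    plus = [[max(c, 0) for c in row] for row in kernel]
    minus = [[max(-c, 0) for c in row] for row in kernel]
    return plus, minus

def convolve_at(image, kernel, i, j):
    """Linear combination of image values under the kernel at (i, j)."""
    return sum(kernel[l][m] * image[i + l][j + m]
               for l in range(len(kernel))
               for m in range(len(kernel[0])))

# C(t) = C(+)(t) - C(-)(t): the signed result is recovered by
# differencing the two nonnegative partial convolutions.
image = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
kernel = [[1, -1], [-2, 2]]         # mixed-sign coefficients
plus, minus = split_kernel(kernel)
c_plus = convolve_at(image, plus, 0, 0)
c_minus = convolve_at(image, minus, 0, 0)
assert c_plus - c_minus == convolve_at(image, kernel, 0, 0)
```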
The CCD focal plane convolver of the present invention collects these electrons in two paired charge packets corresponding to C^{(+)} and C^{(−)} centered about every pixel (i,j) and performs the difference, C^{(+)} − C^{(−)}, on the chip in a noise compensating manner using the double correlated double sampling processing described in the aforementioned U.S. Pat. No. Re. 30,087; U.S. Pat. Nos. 4,035,629; and 4,079,238.
Some aspects and applications of the present invention are described in "CCD Focal Plane Convolver (Smart Eyeball)", by Dr. Paul R. Beaudet, published Mar. 20, 1985 in Machine Vision Technical Digest, and the teachings of this paper are incorporated by reference herein.