In the art of robotics, it is increasingly expected that a robot should "see" an object and, in response thereto, be able to perform some predetermined (and perhaps adaptive) task. The robot may "see" the object by use of a television camera, which provides electrical signals representing the object to an image-processor/controller arrangement. One known vision arrangement is disclosed in the publication by G. J. Gleason et al., "A Modular Vision System for Sensor-Controlled Manipulation and Inspection," 9th International Symposium on Industrial Robots (Mar. 13-15, 1979), pp. 57-70. There a camera provides an object picture comprising a 128×128 array of picture elements (also called pixels or pels in the art). The TV camera pel signals are coupled through an interface preprocessor to a digital computer. The computer includes memory for storing vision library programs as well as application programs. The programs analyze the object picture by segmenting it into contiguous regions of one color (such as black or white); these regions are called "blobs". Associated with each blob is an array of data called a blob descriptor. Each blob descriptor includes information characterizing features of the blob, such as its geometrical area, and these features, in turn, are useful for identifying the specific object "seen" by the TV camera.
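The segmentation described above can be illustrated with a minimal sketch. This is not the algorithm of the Gleason et al. system; it merely shows the idea of grouping same-color pixels of a binary picture into contiguous regions ("blobs") and attaching a simple descriptor. The names `label_blobs` and `blob_descriptor` are illustrative, not drawn from the publication.

```python
from collections import deque

def label_blobs(image):
    """Segment a binary image into 4-connected blobs of 1-pixels.

    image: list of rows, each a list of 0/1 pel values.
    Returns a list of blobs, each a list of (x, y) pixel coordinates.
    (Illustrative sketch only; the cited system's algorithm differs.)
    """
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for y in range(rows):
        for x in range(cols):
            if image[y][x] == 1 and not seen[y][x]:
                # Breadth-first flood fill collects one contiguous region.
                blob, queue = [], deque([(x, y)])
                seen[y][x] = True
                while queue:
                    cx, cy = queue.popleft()
                    blob.append((cx, cy))
                    for nx, ny in ((cx + 1, cy), (cx - 1, cy),
                                   (cx, cy + 1), (cx, cy - 1)):
                        if (0 <= nx < cols and 0 <= ny < rows
                                and image[ny][nx] == 1 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((nx, ny))
                blobs.append(blob)
    return blobs

def blob_descriptor(blob):
    """A toy blob descriptor: here only the geometrical area (pixel count)."""
    return {"area": len(blob)}
```

A 128×128 frame would simply be a 128-row instance of the same structure; real descriptors would carry many more features than area alone.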
One set of features is related to the algebraic moments of the area of a blob. Moments are mathematically useful for determining the geometrical center of the blob and are commonly of a form such as Σx, Σy, Σx², Σxy, and Σy², or even higher powers, where the symbol Σ denotes summation. Unfortunately, known moment generators typically suffer from time inefficiencies related to their multiplicative operations.
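The moment sums above can be sketched as follows, assuming a blob is given as a list of (x, y) pixel coordinates. The function names are illustrative; note that each pixel contributes several multiplications to the second-order sums, which is the multiplicative cost the text attributes to known moment generators.

```python
def blob_moments(blob):
    """Accumulate zeroth-, first-, and second-order moments of a blob.

    blob: iterable of (x, y) pixel coordinates of one contiguous region.
    Returns (area, Σx, Σy, Σx², Σxy, Σy²).  The three products per pixel
    illustrate the multiplicative work noted in the text.
    """
    area = sx = sy = sxx = sxy = syy = 0
    for x, y in blob:
        area += 1          # zeroth-order moment: pixel count
        sx += x            # first-order moments
        sy += y
        sxx += x * x       # second-order moments (multiplications here)
        sxy += x * y
        syy += y * y
    return area, sx, sy, sxx, sxy, syy

def centroid(blob):
    """Geometrical center of the blob: (Σx / area, Σy / area)."""
    area, sx, sy, _, _, _ = blob_moments(blob)
    return sx / area, sy / area
```

For example, the four corner pixels (0, 0), (2, 0), (0, 2), (2, 2) have centroid (1.0, 1.0), the center of the square they outline.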