1. Field of the Invention
This invention pertains to conformal, three-dimensional, volumetric data processing, as for computer vision, and in particular to such processing as implemented on a parallel or systolic data processor composed of individual bit-serial processors.
2. Background Art
Prior attempts at processing and storing three-dimensional volumetric data have been severely limited for several reasons. Among these is the loss of information at the image-capture level (2-D image storage) with attendant propagation of such losses during further processing. Efforts to avoid this limitation have traditionally involved the introduction of additional system overhead in the form of increasingly complex hardware or software or both. Such efforts have been hindered, however, by a lack of synergism between hardware and software.
Contemporary three-dimensional volumetric methods have also dealt with a viewpoint-independent, object-centered representation of an "object-space," whereas two-dimensional methods are normally viewer-centered, such that objects in the image are represented in an "image-space." See, for example, R. T. Chin et al., "Model-Based Recognition in Robot Vision," ACM Computing Surveys, Vol. 18, No. 1 (March 1986). Transition from two to three dimensions can be computationally intensive, a drawback that can be compounded by an essentially arbitrary choice of a frame of reference in each case.
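By way of illustration, the two-to-three-dimensional transition described above can be sketched as a back-projection from a viewer-centered image-space into an object-centered frame. The pinhole camera model, the intrinsic parameters (f, cx, cy), and the single-axis pose used here are illustrative assumptions, not part of the prior art cited:

```python
import math

def backproject(u, v, depth, f, cx, cy):
    """Map an image-space pixel (u, v) plus a depth value to viewer-centered
    3-D coordinates under an assumed pinhole model with focal length f and
    principal point (cx, cy)."""
    x = (u - cx) * depth / f
    y = (v - cy) * depth / f
    return (x, y, depth)

def to_object_space(p_cam, yaw, t):
    """Re-express a viewer-centered point in an object-centered frame by
    undoing an assumed camera pose (here a yaw rotation about the vertical
    axis followed by a translation t)."""
    c, s = math.cos(yaw), math.sin(yaw)
    x, y, z = p_cam
    # Rotate about the vertical axis, then translate into the object frame.
    return (c * x + s * z + t[0], y + t[1], -s * x + c * z + t[2])
```

Even in this minimal sketch, every point requires a per-pixel division and a frame transformation, which suggests why the transition is computationally intensive and why the arbitrary choice of reference frame compounds the cost.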
To complicate matters further, so-called "2½-D" techniques are sometimes used to try to exploit the best features of both 2-D and 3-D representations by using a "surface-space" definition. The resulting need to "fuse" partial surfaces, which in some cases may be separated by missing portions, in turn requires computationally time-consuming "tagging" of different frames of reference for the differently oriented surfaces in the surface-space, together with identification of the possible associations between those surfaces.
The net result is that image processing systems have been slow and highly memory intensive.
In a more specific environment, sensor data fusion based on SONAR (SOund Navigation And Ranging) and LIDAR (LIght Detection And Ranging) requires reconstruction of images in a conformal, three-dimensional manner.
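A minimal sketch of the reconstruction step such fusion depends on is the mapping of a single SONAR or LIDAR range return into a conformal 3-D volume. The spherical-to-Cartesian angle convention and the uniform voxel size used here are assumptions for illustration only:

```python
import math

def range_return_to_voxel(r, azimuth, elevation, voxel_size):
    """Convert one range return (range r, bearing angles in radians) into an
    integer voxel index of an assumed uniform 3-D volume.  Azimuth is measured
    in the horizontal plane and elevation from that plane (an assumed
    convention; real sensors differ)."""
    x = r * math.cos(elevation) * math.cos(azimuth)
    y = r * math.cos(elevation) * math.sin(azimuth)
    z = r * math.sin(elevation)
    return (int(math.floor(x / voxel_size)),
            int(math.floor(y / voxel_size)),
            int(math.floor(z / voxel_size)))
```

Fusing the two sensor modalities then amounts to accumulating such returns from both sources into the same volumetric grid, so that each occupied voxel carries evidence from either or both sensors.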