Data fusion refers to the process of associating or corresponding heterogeneous sensory data, in this instance three dimensional data from a known active optical triangulation based sensor, S3, and an independent TV camera, S2. While S3 provides unambiguous three dimensional coordinates of the underlying surface, S2 provides the intensity data associated with such a scene.
Data fusion can, however, be effected at different levels of abstraction. Conventionally, when fusion is effected at pixel level for range and intensity imagery, the fused region is confined to specific spatial structures (e.g., landmarks, edges, curves) that can be detected in both sensory spaces. Hence such a fusion is constrained by the scene context.
The process of data fusion, as intended here, enables attaching intensity values to every surface point imaged through both S3 and S2, and vice versa. When S2 is a color camera, the points obtained through S3 can be assigned suitable color coordinates.
It is an objective of this invention to effect such data fusion even for "featureless" scenes or surfaces. It is a further objective of this invention to effect such data fusion from sub-pixel to coarser resolutions, as well as at higher levels of abstraction such as segmented images or classified targets.
Data fusion as described herein can prove valuable in feature selection, and hence in pattern classification in general and in image segmentation and object classification in particular, as fused data expand the domain in which uncorrelated features may be sought. The field of robotic vision, and applications therein such as automated assembly and industrial inspection, can greatly benefit from such enriched sensory data. The invention can also greatly benefit the rendering of range imagery when S2 is a color camera.
Data fusion as described herein is centered on a self-scanned three dimensional sensor, S3, and a calibration process. A self-scanned sensor can effect a controlled relative motion between its projected laser stripe and the TV camera, S2, which may even be attached to S3's enclosure. The calibration process yields a "data fusion table" that connects the resolution cells in the S2 space to rays whose equations are established in the S3 space.
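The data fusion table described above can be illustrated with a minimal sketch. The table form (a mapping from S2 resolution cells to ray origins and unit directions in S3 coordinates), the function name, and the nearest-ray matching rule are assumptions for illustration only, not the disclosed calibration procedure; the sketch merely shows how such a table allows an intensity value to be attached to a three dimensional point measured by S3.

```python
import numpy as np

def attach_intensity(point, fusion_table, intensity_image):
    """Attach an S2 intensity to a 3D point measured in the S3 space.

    fusion_table: dict mapping an S2 cell (u, v) to a ray, given as
                  (origin, unit_direction), both expressed in S3 coordinates.
    intensity_image: dict mapping an S2 cell (u, v) to its intensity value.
    Returns the intensity of the cell whose ray passes closest to the point.
    """
    best_cell, best_dist = None, float("inf")
    for cell, (origin, direction) in fusion_table.items():
        w = point - origin
        # Perpendicular distance from the point to the ray.
        dist = np.linalg.norm(w - np.dot(w, direction) * direction)
        if dist < best_dist:
            best_cell, best_dist = cell, dist
    return intensity_image[best_cell]
```

In practice the table would be indexed directly rather than searched exhaustively; the loop above is kept only to make the ray-to-point correspondence explicit.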
When a scene is imaged through a set of multiple 3D sensory sources, the problem of adjacency of the resulting data can prove difficult. Data fusion tables may equally be created for an arrangement of two 3D sensors and an independent TV camera, and facilitate the solution.
It is also shown how to effect data fusion for single-plane-of-light sensors whose projected light stripe moves in tandem with S2.
Throughout this disclosure the illuminant of S3 is assumed to be a laser source, radiating a beam of light which is optically converted into a plane that in turn projects a light stripe onto the underlying surface. The said data fusion is, however, equally applicable when S3 is a flying spot based three dimensional sensor.
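The plane-of-light geometry assumed throughout this disclosure can be sketched as follows. The function name, the plane representation (unit normal n and offset d with n·X = d), and the ray parameterization X = o + t·r are illustrative assumptions; the sketch shows only the standard intersection of a viewing ray with the laser plane, which is the geometric basis of active optical triangulation.

```python
import numpy as np

def intersect_ray_with_plane(plane_n, plane_d, ray_origin, ray_dir):
    """Intersect a camera pixel ray with the laser plane of S3.

    The plane satisfies n . X = d; the ray is X = o + t * r.
    Returns the 3D intersection point, or None if the ray is
    (numerically) parallel to the plane.
    """
    denom = np.dot(plane_n, ray_dir)
    if abs(denom) < 1e-9:
        return None  # ray lies in or parallel to the plane
    t = (plane_d - np.dot(plane_n, ray_origin)) / denom
    return ray_origin + t * ray_dir
```

Each illuminated pixel on the imaged light stripe defines one such ray, and its intersection with the known laser plane yields one unambiguous surface point.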