This section is intended to provide a background or context to the invention disclosed below. The description herein may include concepts that could be pursued, but are not necessarily ones that have been previously conceived, implemented, or described. Therefore, unless otherwise explicitly indicated herein, what is described in this section is not prior art to the description in this application, and is not admitted to be prior art by inclusion in this section. Abbreviations that may be found in the specification and/or the drawing figures may be defined below at the end of the specification, but prior to the claims.
The present invention relates generally to the broad area of sensor fusion, in which sensory data from disparate sources are combined to improve the resulting data quality, usually in terms of accuracy and robustness.
Sensor fusion can be roughly divided into two categories: multi-sample fusion and multi-modal fusion. Multi-sample fusion takes advantage of redundancy in the input data to significantly reduce the noise in individual sensor readings, generating much cleaner output. Multi-modal fusion takes advantage of the often complementary nature of different sensing modalities, for example, combining the fine detail captured by photometric stereo with the metric reconstruction from stereo to reduce systematic errors in the fused data.
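The multi-sample case described above can be illustrated with a minimal sketch, not taken from the specification: averaging K redundant depth frames of the same static scene. Under the assumption of independent zero-mean noise, the averaged output has a noise standard deviation roughly 1/sqrt(K) that of a single frame. The scene, noise level, and frame count below are all hypothetical values chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical flat scene at 2 m, observed through K noisy depth frames.
true_depth = np.full((64, 64), 2.0)   # metres (assumed ground truth)
sigma = 0.05                          # assumed per-frame noise std, metres
K = 16                                # number of redundant frames

frames = true_depth + rng.normal(0.0, sigma, size=(K, 64, 64))

# Multi-sample fusion: average the redundant frames.
fused = frames.mean(axis=0)

single_frame_error = np.std(frames[0] - true_depth)
fused_error = np.std(fused - true_depth)

# With independent noise, fused_error is expected to be roughly
# single_frame_error / sqrt(K).
print(single_frame_error, fused_error)
```

The same averaging principle underlies more elaborate multi-sample schemes (e.g., weighted or outlier-rejecting averages), which trade additional computation for robustness to non-Gaussian noise.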
Unlike these existing approaches to sensor fusion, the present invention is directed toward improving the frame rate of depth sensing through the use of a high-speed video camera.