Field of the Invention
Embodiments of the present invention relate, in general, to graphical rendition of data and more particularly to systems and methods for providing interactive displays of multi-modal data.
Relevant Background
Sensor fusion spans a wide spectrum of interests, ranging from sensor hardware and data acquisition, to analog and digital processing of data, to symbolic analysis and rendition. In each instance, fusion of data typically operates within a framework directed at resolving some class of problem.
Sensor fusion combines sensory data, or data derived from sensory data, from disparate sources such that, ideally, the result gained from the combined information is better than would be possible were these sources used individually. The term “better” in this case can mean more accurate, more complete, or more dependable, or can refer to the result of an emergent view, such as stereoscopic vision (the calculation of depth information by combining two-dimensional images from two cameras at slightly different viewpoints).
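The stereoscopic-vision example above can be sketched with the classic pinhole-camera relation, in which depth is recovered from the disparity between matching points in the two camera images. The function name and the numeric values below are hypothetical and chosen purely for illustration:

```python
def depth_from_disparity(focal_length_px: float,
                         baseline_m: float,
                         disparity_px: float) -> float:
    """Pinhole-camera stereo relation: depth Z = f * B / d,
    where f is focal length (pixels), B is the camera baseline
    (metres), and d is the disparity between matched points (pixels)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# A point seen 40 px apart by two cameras 0.1 m apart,
# each with an 800 px focal length, lies 2.0 m away:
z = depth_from_disparity(800.0, 0.1, 40.0)  # -> 2.0
```

This is the sense in which fusing two two-dimensional views yields an emergent third dimension that neither camera provides alone.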
The data sources for a fusion process are not required to originate from identical sensors, or even from “sensors” of a particular class. As would be known to one skilled in the art of sensor fusion, direct fusion is the fusion of sensor data from a set of heterogeneous or homogeneous sensors, soft sensors, and historical values of sensor data, while indirect fusion uses information sources such as a priori knowledge about the environment and human input.
Information integration, a concept closely related to sensor fusion, is the merging of information from disparate sources with differing conceptual, contextual and typographical representations. It is used in operations such as data mining and the consolidation of data from unstructured or semi-structured resources. Typically, information integration refers to textual representations of knowledge, but it is sometimes applied to rich-media content. Information fusion, a related term, involves the combination of information into a new set of data toward the goal of reducing uncertainty. Examples of technologies available to integrate information include string metrics, which allow the detection of similar text in different data sources by fuzzy matching.
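The string-metric approach mentioned above can be illustrated with the Levenshtein edit distance, one common string metric; the similarity threshold and function names here are hypothetical choices for the sketch, not part of the claimed invention:

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance: the minimum number of single-character
    insertions, deletions, and substitutions turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def fuzzy_match(a: str, b: str, threshold: float = 0.8) -> bool:
    """Treat two strings as referring to the same entity when their
    normalised similarity meets the (illustrative) threshold."""
    longest = max(len(a), len(b)) or 1
    return 1 - levenshtein(a, b) / longest >= threshold

# "Jon Smith" and "John Smith" differ by one insertion,
# so they fuzzy-match despite coming from different data sources:
same = fuzzy_match("Jon Smith", "John Smith")  # -> True
```

In an information-integration pipeline, such a metric lets records with slightly different spellings of the same name be consolidated into one entity.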
Work on sensor fusion and information integration over the past two decades has attempted to fuse large amounts of disparate raw data into an information-rich, “system-of-systems” data display. The displays of the prior art are generally characterized by dependence on GPS, possession of some form of terrain data, and reliance on high-bandwidth data communication. Unfortunately, recipients of such systems consistently report being overwhelmed with data, find it difficult to correlate the information spatially or temporally into actionable insights, or do not receive vital data of local relevance.
The continuous receipt and transmission of detailed raw data often requires high-bandwidth communications systems and yet yields marginal returns. Such an approach imposes a high workload on users or analysts to extract locally relevant insights across even a few modalities and deliver them to geographically separated users. Moreover, current approaches to real-time information gathering and dissemination are not linked egocentrically to user movement and local changes. There remains a need to render multi-modal data in such a way as to highlight viewpoint, motion and relative changes, and to process that data so as to recognize events rather than objects or images. One or more features of the present invention address these and other deficiencies of the prior art.