Field
Embodiments of the present invention generally relate to data correlation systems and, more particularly, to a method and apparatus for correlating and viewing disparate data.
Description of the Related Art
John Naisbitt's famous words often seem truer in today's world than ever before: “We are drowning in information, but starved for knowledge.” Increasingly, there are many different, widely available sources of data such as social networks, news sites and newsfeeds, blogs, webcams, and a wide variety of other private and public sources for diverse types of data including photos, videos, and textual content. This creates a growing need for better, more coherent ways to correlate, and to derive semantic information from, these multiple multi-modal sources of information, and to view and navigate all of this data in an organized and meaningful way. Conventional search engines and information retrieval systems, however, are often weak at synthesizing data from multiple sources and channels over multiple modalities, where that data needs to be correlated and “aligned” along multiple dimensions such as geo-space, time, entities, events, and their semantics.
Current research on cross-modal association tends to rely on an underlying assumption that the different modalities have strongly correlated temporal alignment, which is not always the case. The “Semantic Web” (see www.w3.org/2001/sw) is an example of a technological approach to enable derivation of meaning and associations from web-based content that has been manually semantically “tagged.” However, much of the data that is available and continues to be published on the Internet is not semantically tagged at present. Geo-location, for example, can potentially be an important cue in cross-modality association. However, much of the image and video content available on today's Internet may not include location metadata, much less precise geo-location and orientation coordinates, and so it cannot readily be correlated and reasoned about with regard to its geographical location. Broadly speaking, cross-modality association is difficult in part because it entails interpreting signals at a semantic level in order to make correlations, and significant technological challenges remain in solving the problem of correlating cross-modal data to produce meaningful inferences.
Additionally, existing methods of creating cross-modal associations do not harness the local, timely, “everywhere” nature of open media (social media, including FACEBOOK, TWITTER, INSTAGRAM, and the like) to produce intelligence such as prediction, planning, and response related to events.
Therefore, there is a need in the art for a method and apparatus for aligning, correlating and viewing disparate and/or unsynchronized data along multiple dimensions (geo-space, time, entities, events and their semantics) in order to produce meaningful inferences and responses to queries, based on cross-modal and multi-modal data streams.