Field of the Invention
The present invention generally relates to computer science and, more specifically, to techniques for processing and viewing video events using event metadata.
Description of the Related Art
In recent years, the practice of storing large collections of videos online has grown explosively. As a result, there has been a surge of interest in developing ways to allow users to efficiently locate and navigate to scenes of interest (referred to as “events”) within videos. However, finding specific scenes or events of interest within a large collection of videos remains an open challenge. For example, consider a baseball fan who wishes to watch all home runs hit by their favorite player during a baseball season. Even if the user manages to create a playlist of all videos (games) in which the events of interest (home runs) occurred, watching the entire playlist to view those events would still be time consuming. One current approach to identifying events within videos is to manually view each video and identify and record information for the relevant events it contains. This manual method of identifying events is a time-consuming and error-prone process. Thus, there is a need for a more efficient technique for identifying relevant events within videos.
Once the relevant events within a video have been identified and metadata recorded for those events, a user interface is typically provided that allows users to search for and view the events. One current approach provides metadata search and exploration in the user interface using only a single attribute at a time (one-dimensional search). Another approach provides search and playback of events in the user interface spanning only one video per search. Current user interfaces, however, do not fully leverage the event metadata to enable effective search and playback of events using multiple attributes across multiple videos. Thus, there is also a need for a more effective technique for searching for and playing back relevant events within videos.
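To make the distinction concrete, the following is a minimal sketch of the kind of multi-attribute search across multiple videos described above. It is an illustration only, not the claimed invention; all record names, attribute keys, and sample values are hypothetical.

```python
# Hypothetical sketch: filtering event metadata by multiple attributes
# across a collection of videos. All names and values are illustrative.
from dataclasses import dataclass, field

@dataclass
class Event:
    video_id: str               # which video the event occurs in
    start_sec: float            # start time of the event within the video
    end_sec: float              # end time of the event within the video
    attributes: dict = field(default_factory=dict)  # e.g. {"type": "home_run"}

def find_events(events, **criteria):
    """Return events whose metadata matches every given attribute (AND semantics)."""
    return [e for e in events
            if all(e.attributes.get(k) == v for k, v in criteria.items())]

# A small event-metadata catalog spanning several videos (games).
catalog = [
    Event("game_01", 512.0, 530.0, {"type": "home_run", "player": "Smith"}),
    Event("game_02", 101.0, 118.0, {"type": "strikeout", "player": "Smith"}),
    Event("game_02", 845.0, 862.0, {"type": "home_run", "player": "Smith"}),
    Event("game_03", 230.0, 247.0, {"type": "home_run", "player": "Jones"}),
]

# Multi-attribute search spanning multiple videos: all of Smith's home runs.
playlist = find_events(catalog, type="home_run", player="Smith")
```

A one-dimensional search corresponds to calling `find_events` with a single criterion; combining several criteria in one query, over events drawn from many videos, is the multi-attribute, multi-video capability that current interfaces are said to lack.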