The Internet has made numerous forms of content available to users across the world. For example, consumers access the Internet to view articles, research topics of interest, watch videos, and the like. In particular, online viewing of digital media has become extremely popular in recent years.
Yet further, the growing prominence and value of digital media, including libraries of full-featured films, digital shorts, television series and programs, news programs, and similar professionally (and amateur) produced multimedia (previously and hereinafter referred to generally as “digital media,” “multimedia digital assets,” “multimedia,” “videos,” “assets,” “files,” “content,” or any combination of such terms), require effective and convenient means of navigating, searching, and retrieving such digital media.
Sometimes when searching, users are content simply to browse through broad categories or genres of videos until they find an asset of potential interest. However, sorting through all available videos to find content that may be of greater interest to a specific user is becoming increasingly unwieldy and difficult, especially as the number of content sources and the sheer amount of available content increase. Going forward, users are more likely to want to browse through a more targeted or selected subset of available videos, TV shows, movies, or other multimedia assets. Thus, the ability to generate and present a targeted subset of assets that are more likely to be of greater interest to a user or group of users is desirable.
“Metadata,” as the term is used herein, is simply information about other information: in this case, information about digital media. For example, metadata may refer to keywords or terms that identify basic information or characteristics associated with a particular digital media asset, including but not limited to actors appearing, characters appearing, dialog, subject matter, genre, setting, location, or themes presented in the particular digital media asset. Metadata may relate to the digital media asset as a whole (such as the title, date of creation, director, producer, production studio, MPAA or similar content rating, etc.) or may be relevant only to particular scenes, images, audio, or other portions of the digital media asset. Metadata related to digital media assets as a whole is commonly available. However, rich or time-based metadata, which is associated with discrete portions or segments of a digital media asset, is not as readily available; it provides significant value and enables much more accurate searching and analysis of video content.
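By way of illustration only, the distinction between asset-level metadata and rich, time-based metadata might be represented as follows. All field names, values, and time offsets in this sketch are hypothetical and do not correspond to any particular system or schema.

```python
# Hypothetical asset-level metadata: applies to the digital media asset
# as a whole.
asset_metadata = {
    "title": "Example Feature Film",
    "year": 2020,
    "director": "Jane Doe",
    "rating": "PG-13",
    "genres": ["drama", "thriller"],
}

# Hypothetical time-based ("rich") metadata: keywords tied to discrete
# segments of the asset, expressed here as (start, end) offsets in seconds.
segment_metadata = [
    {"start": 0,   "end": 95,  "keywords": ["city skyline", "opening credits"]},
    {"start": 95,  "end": 310, "keywords": ["car chase", "night", "rain"]},
    {"start": 310, "end": 480, "keywords": ["dialog", "restaurant", "protagonist"]},
]

def segments_matching(keyword, segments):
    """Return the (start, end) spans whose keywords include `keyword`."""
    return [(s["start"], s["end"]) for s in segments if keyword in s["keywords"]]
```

Time-based metadata of this kind allows a query to be answered at the level of an individual scene or segment, rather than merely returning the asset as a whole.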
Typical video search engines, such as the Internet Movie Database (IMDb®), rely upon basic keywords or terms that are related to the digital media asset as a whole. If a term or keyword has been associated with a video, then that video will be returned as a possibly relevant response to a user search query that contains matching terms or keywords. This is a basic “semantic” type of search, which presents the results of information retrieval based on metadata. Ranking of the videos returned in response to the search query can be determined in many different ways: by date; alphabetically by title; by the number of matches between the search terms and the keywords; by the order in which such keywords are arranged (presumably by order of importance) in the metadata; or by whether such keywords appear in more important data fields of the metadata (e.g., title, actor name, director, writer, etc.), as opposed to less important data fields (e.g., plot synopsis or summary of the video).
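One of the ranking approaches described above, weighting matches by the importance of the metadata field in which they occur, can be sketched as follows. The field names and weights are illustrative assumptions, not the ranking scheme of any actual search engine.

```python
# Hypothetical field weights: matches in "important" fields (e.g., title)
# count for more than matches in a plot synopsis.
FIELD_WEIGHTS = {"title": 3.0, "actors": 2.0, "director": 2.0, "synopsis": 1.0}

def score(query_terms, asset):
    """Sum the field weight for every query term found in that field."""
    total = 0.0
    for field, weight in FIELD_WEIGHTS.items():
        value = asset.get(field, "")
        text = " ".join(value).lower() if isinstance(value, list) else str(value).lower()
        total += sum(weight for term in query_terms if term.lower() in text)
    return total

def rank(query_terms, assets):
    """Return matching assets ordered by descending score."""
    scored = [(score(query_terms, a), a) for a in assets]
    return [a for s, a in sorted(scored, key=lambda p: -p[0]) if s > 0]
```

Under this scheme, a video whose title matches the query ranks above one whose synopsis alone matches, consistent with the field-importance ordering described above.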
Some Internet video search engines, such as Hulu®, Netflix®, YouTube®, or Amazon®, allow a user to search for a video by term or keyword and then, once a particular video has been selected by the user, present a handful of other videos that may also be of interest based on the viewing or purchasing habits of users who previously selected the same particular video. This is one type of “latent feature” search, which presents results based on user interaction, user activity information, or collaborative filtering (such as, in a non-limiting example, user ratings).
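The “users who watched this also watched that” behavior described above can be sketched, in a deliberately simplified form, as item co-occurrence counting over user viewing histories. This is only one elementary form of collaborative filtering; the viewing histories below are invented for illustration.

```python
from collections import Counter

def also_watched(target, histories, top_n=3):
    """Count how often other videos co-occur with `target` across user
    viewing histories, and return the most frequent co-occurrences."""
    counts = Counter()
    for history in histories:
        if target in history:
            counts.update(v for v in history if v != target)
    return [video for video, _ in counts.most_common(top_n)]
```

Production recommendation systems employ far more sophisticated techniques (e.g., matrix factorization over user ratings), but the underlying principle is the same: results are derived from latent patterns in user activity rather than from metadata describing the content itself.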
There remains a need, however, for more powerful and intelligent search capabilities that take advantage of the rich metadata and latent feature information associated with available digital media assets. Such capabilities would generate and return ranked search results that are highly relevant to a user's search query or to the user's selection of one or more video assets, which may then serve as a reference point or target for the type of video assets likely to be of interest to the user or group of users.
The present inventions, as described and shown in greater detail hereinafter, address and teach one or more of the above-referenced capabilities, needs, and features that would be useful for a variety of businesses and industries as described, taught, and suggested herein in greater detail.