Search engines are the prevailing tools for accessing information in a controlled manner. Popular search engines, such as the one provided by BING™ (BING is a trademark of Microsoft Corporation), provide an infrastructure that supports millions of queries on a daily basis. It is well known that search engines typically employ one or more programs (known as “crawlers” or “spiders”) that automatically collect web resources, including but not limited to, web pages, images, videos, audio files, Word documents, PDFs, etc. Dynamic crawlers can be employed to follow entities and provide updated data on such entities. The search engine creates copies of all retrieved pages and indexes the downloaded pages to provide fast searches. Since most web pages contain objects (such as links) to other web pages, a crawler can start almost anywhere and repeatedly follow the links found from a central page to index new resources.
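The crawl-and-index behavior described above can be illustrated with a minimal sketch. The breadth-first loop, the `fetch` callable, and the in-memory "web" below are illustrative assumptions, not any particular search engine's implementation:

```python
from collections import deque
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href targets from <a> tags on a retrieved page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, fetch):
    """Breadth-first crawl: start at `seed`, repeatedly follow links,
    and build an index mapping each visited URL to its content."""
    index = {}
    frontier = deque([seed])
    while frontier:
        url = frontier.popleft()
        if url in index:
            continue  # already indexed; avoid revisiting
        page = fetch(url)
        if page is None:
            continue  # unreachable resource; skip it
        index[url] = page
        extractor = LinkExtractor()
        extractor.feed(page)
        frontier.extend(extractor.links)
    return index

# Usage with an in-memory "web" standing in for real HTTP fetches:
web = {
    "a": '<a href="b">B</a><a href="c">C</a>',
    "b": '<a href="a">A</a>',
    "c": "",
}
index = crawl("a", web.get)
```

Starting from page "a", the crawler discovers and indexes "b" and "c" through the links alone, mirroring how a real crawler can reach resources it was never explicitly given.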
A problem with the design of conventional search engines is that the focus of the search is placed on the location of the information as a destination. Accessing information about such entities is a process that is not always intuitive. As shown in FIG. 1, distinct destinations are currently organized around topics or “entities” 10 that may appear on one or more pages 12 organized on a website 14 or in any other web-supported medium. Currently, web users must proactively seek the relevant feeds or dynamic content related to entities of interest and rely upon the search engine's ranking of the top matching web pages to retrieve the relevant data. Upon identifying an entity by searching, a web user who wishes to follow that entity and receive desired updates must designate the entity as being of interest. The web user can elect to receive updated entity content by subscribing to an RSS (Really Simple Syndication) feed (or to receive delivery of updated dynamic content). The web user can take the extra step of creating alerts (via e-mail, SMS, video feed, audio signal, or the like) that are delivered on a specific temporal basis (monthly, weekly, daily, or immediately as updates occur).
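The subscription model described above can be sketched as a polling check against a feed. The `new_items` helper, the GUID-based deduplication, and the inline feed strings are assumptions for illustration; real RSS readers and alert systems differ in detail:

```python
import xml.etree.ElementTree as ET

def new_items(feed_xml, seen_guids):
    """Return titles of feed items not yet seen, updating seen_guids.
    This is the check a subscriber's reader runs on each polling interval
    to decide whether an alert should fire."""
    root = ET.fromstring(feed_xml)
    fresh = []
    for item in root.iter("item"):
        guid = item.findtext("guid") or item.findtext("link")
        if guid and guid not in seen_guids:
            seen_guids.add(guid)
            fresh.append(item.findtext("title"))
    return fresh

# Usage: two polls of a hypothetical feed, with one item added between them.
feed_v1 = """<rss><channel>
<item><guid>1</guid><title>First story</title></item>
<item><guid>2</guid><title>Second story</title></item>
</channel></rss>"""
seen = set()
first_poll = new_items(feed_v1, seen)   # both items are new on first poll

feed_v2 = feed_v1.replace(
    "</channel>",
    "<item><guid>3</guid><title>Third story</title></item></channel>")
second_poll = new_items(feed_v2, seen)  # only the added item is reported
```

Note that the subscription is bound to one feed URL: updates about the same entity published elsewhere never reach this check, which is the limitation discussed below.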
Conventional entity following does not address the manner in which the vast majority of people use search engines. As Internet technologies rapidly restructure methods of content distribution, and as the web-based knowledge stream becomes increasingly digitized, it is desirable to translate web content resources into a function that more closely replicates actual user logic and intuition. While conventional RSS and alert systems provide some similar features, they do not completely fulfill the web user's need to follow designated entities in real time. RSS limits the web user to following a single data source about an entity rather than multiple sources (for instance, following stories on Brooklyn from the New York Times rather than the extensive web content directed to information about Brooklyn). Any associated alerting system that allows web users to receive alerts on a desired entity is based on a subscription to a data source rather than on the entity itself.
It is therefore desirable to employ user experience (UX) design to overcome such limitations by making the web user experience part of the design process. Specifically, as information spaces become more niche, it is desirable to provide an overarching architecture that enables web users to surface entities in the web context, regardless of where the content resides, and contemporaneously associate an alert function with updates in entity content.