Conventional search engines generally use web crawlers and robots to obtain information. After collecting the information, the search engine typically aggregates it based on keyword indexing. The aggregation mechanisms used by conventional search engines are thus driven by the wording of the information being aggregated.
Once the information has been indexed, conventional search engines formulate contextual search paradigms based on the indices and provide users with search results. Upon receiving the results, users must sift through them to determine which are relevant.
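The keyword-indexing approach described above can be sketched as a simple inverted index mapping terms to documents, with conjunctive matching over query terms. This is an illustrative assumption of how such indexing commonly works, not a description of any particular engine; the document and function names are hypothetical.

```python
from collections import defaultdict

def build_index(documents):
    """Build a keyword (inverted) index: term -> set of document ids.

    Aggregation is driven purely by wording, as in conventional engines."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """Return ids of documents containing every query term."""
    terms = query.lower().split()
    if not terms:
        return set()
    results = set(index.get(terms[0], set()))
    for term in terms[1:]:
        results &= index.get(term, set())
    return results

# Hypothetical corpus gathered by a crawler.
docs = {
    1: "enterprise search engines index documents",
    2: "web crawlers collect documents from servers",
    3: "keyword indexing aggregates information",
}
idx = build_index(docs)
print(search(idx, "documents"))          # matches documents 1 and 2
print(search(idx, "keyword indexing"))   # matches document 3
```

Note that the index has no notion of context: any document containing the query wording is returned, and the user is left to judge relevance.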
A multi-user environment, such as an enterprise environment, poses several issues for conventional search engines. For example, conventional search engines offer no control over how information is obtained and aggregated. Additionally, web crawlers and robots merely traverse all the documents on a server without evaluating the context of the information as it is collected.
In a multi-user environment, a user may need to know only certain information that falls within a particular context, and an organization may also limit access to certain information to a subset of users.
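The organizational restriction above can be illustrated by filtering search results against a per-document access-control list before they reach the user. This is a minimal sketch under assumed group-based permissions; the ACL structure and names are hypothetical.

```python
def filter_by_access(results, user_groups, doc_acl):
    """Keep only documents whose ACL intersects the user's groups.

    doc_acl maps document id -> set of groups permitted to view it."""
    return {doc_id for doc_id in results
            if doc_acl.get(doc_id, set()) & user_groups}

# Hypothetical access-control list for an enterprise corpus.
doc_acl = {
    1: {"engineering"},
    2: {"engineering", "sales"},
    3: {"hr"},
}

# A user in the "sales" group searching a raw result set {1, 2, 3}
# should see only document 2.
print(filter_by_access({1, 2, 3}, {"sales"}, doc_acl))
```

Conventional engines, which index and return results based on wording alone, provide no such hook for restricting a subset of users to a subset of the information.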