Current Internet search tools often return irrelevant data sites or web pages. These tools typically assign a relevance score based on term frequency within a given data site or web page. Consider, for example, the query "termites" and "Tasmania" and "not apples":
A web page containing many instances of the word "termites" (600, for example) would receive a high relevance score. A page with 600 instances of "termites" and one instance of "Tasmania" would receive a slightly higher score. A page containing the above plus "apples" would receive a slightly lower score.
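The frequency-based scoring described above can be sketched as follows. This is a minimal illustration of the scoring behavior attributed to existing search tools, not the method of this disclosure; the function name, weights, and sample page are hypothetical.

```python
# Sketch of term-frequency relevance scoring as described above.
# Required terms add to the score; excluded terms ("not apples") subtract.

def tf_relevance(page_text, required_terms, excluded_terms):
    """Score a page by raw counts of required terms, penalized by excluded terms."""
    words = page_text.lower().split()
    score = sum(words.count(term) for term in required_terms)
    # A page containing an excluded term receives a slightly lower score.
    score -= sum(words.count(term) for term in excluded_terms)
    return score

# A page with 600 instances of "termites" and one "tasmania" scores 601.
page = "termites " * 600 + "tasmania"
print(tf_relevance(page, ["termites", "tasmania"], ["apples"]))  # 601
```

Note that this scoring depends only on raw word counts, which is why high counts alone can earn a page a high score regardless of actual relevance.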
Relevance scoring for a data site or web page is thus often based on text or word frequency, so current search tools frequently return lists of irrelevant web pages. This scoring method is also open to abuse, since a page author can inflate term counts to raise a page's score. Current search tools often provide links that are stale, pointing to old data that is no longer present at the address of the data site. Existing search tools rely on indices that are compiled continuously in the background, so an individual query receives a historical result. The search process therefore requires a large amount of filtering by the individual user.
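The stale-link problem described above amounts to results whose addresses no longer respond. A minimal sketch of the kind of liveness filtering the text says existing tools lack, using only Python's standard library, is shown below; the function names are hypothetical.

```python
# Sketch of filtering stale links: probe each result URL and keep only
# those whose data site still responds at the indexed address.
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

def is_live(url, timeout=5):
    """Return True if the URL still responds with a non-error status."""
    try:
        req = Request(url, method="HEAD")  # HEAD avoids downloading the body
        with urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (URLError, HTTPError, ValueError):
        # Unreachable host, HTTP error, or malformed URL: treat as stale.
        return False

def filter_stale(results):
    """Keep only result URLs whose pages are still reachable."""
    return [url for url in results if is_live(url)]
```

Because each probe costs a network round trip, a practical tool would check liveness at query time only for the top-ranked results rather than the entire index.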
There is therefore a need for search tools that more efficiently exclude irrelevant results. In particular, it is desirable to have an efficient search method that takes demographic as well as historical user information into account to filter irrelevant data from the results of existing search tools.
Furthermore, it is desirable to have a search engine that evaluates and filters stale responses returned by an existing search tool.