The Internet is a publicly accessible worldwide system of interconnected computer networks. It consists of millions of smaller domestic, academic, business, and government networks, which together carry various information and services, such as electronic mail, online chat, file transfer, and the interlinked Web pages and other documents of the World Wide Web.
The Internet and the World Wide Web (Web) are not synonymous. While the Internet is a collection of interconnected computer networks, linked by various communication media, such as copper wires, fiber-optic cables, and wireless connections, the Web is a collection of interconnected documents and other resources, linked by hyperlinks and Uniform Resource Locators (URLs). The Web is accessible via the Internet, as are many other services, including e-mail and file sharing.
The Web is accessed by navigating to any of a vast number of “pages,” each located at a unique address. Each of these pages can contain “content,” such as graphics, text, video, and sound. Programmers control what content appears on each of these pages. In addition, each page can link to other pages through hyperlinks. These other pages are identified by URLs embedded in the hyperlinks and contain further content. Due in part to the ease of Web page programming, the Web has experienced a rapid increase in the number of pages and, correspondingly, in the amount of content available via the Internet.
Compared to traditional information sources, such as encyclopedias and libraries, the World Wide Web has enabled a rapid decentralization of information and data. To help locate this data, “search engines” have been developed by a myriad of software developers. Search engines are well-known document retrieval systems used to locate information stored on the Web. Through keyword-driven Internet search engines, such as GOOGLE®, YAHOO®, ASKJEEVES®, and many others, millions of users worldwide now have instant access to a vast and diverse amount of online information.
Known search engines work by accepting one or more user-input keywords, which they compare against the content of Web pages. The comparison can be a basic direct comparison, a complex algorithm, or something in between. Once a specified number of pages has been searched, the results are ranked in some order of “relevance,” a term that has been given several definitions by those who rank information. The results are then displayed in a list, with the page determined to be most “relevant” at the top of the list and the least relevant at the bottom.
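The matching-and-ranking flow described above can be sketched in a few lines of Python. This is a minimal illustration of the “basic direct comparison” end of the spectrum, not any particular engine's method; the page data and scoring rule are assumptions made for the example.

```python
def search(pages, keywords):
    """Rank pages by a simple relevance score: how often the user's
    keywords appear in each page's text (illustrative only)."""
    results = []
    for url, text in pages.items():
        words = text.lower().split()
        score = sum(words.count(k.lower()) for k in keywords)
        if score > 0:
            results.append((url, score))
    # Most "relevant" page first, least relevant last
    results.sort(key=lambda item: item[1], reverse=True)
    return results

# Hypothetical pages, keyed by a made-up address
pages = {
    "a.example": "mutual fund mutual fund prospectus",
    "b.example": "credit card offers",
    "c.example": "a mutual insurance fund",
}

print(search(pages, ["mutual", "fund"]))
# → [('a.example', 4), ('c.example', 2)]
```

Note that the engine returns pages ordered by keyword occurrence alone; nothing about the result distinguishes a page that is actually about mutual funds from one that merely contains both words.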
Unfortunately, determining relevancy is not an exact science. Many search engines define the most relevant site as the one on which the keyword appears most frequently. Others determine relevance by the number of users who select a page when presented with a list of potentially relevant pages. Many other methods are used in attempts to find the page that most closely fits what the searcher is looking for. Some search engines simply rank as the top choice the page that pays them the most money.
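Two of the relevance definitions mentioned above, raw keyword frequency versus user selection counts, can rank the very same pages in opposite orders. The sketch below makes that concrete; the pages and click counts are hypothetical numbers chosen for illustration.

```python
def rank_by_frequency(pages, keyword):
    # "Most relevant" = page on which the keyword appears most often
    return sorted(pages,
                  key=lambda p: p["text"].lower().split().count(keyword),
                  reverse=True)

def rank_by_clicks(pages):
    # "Most relevant" = page most often selected by prior users
    return sorted(pages, key=lambda p: p["clicks"], reverse=True)

pages = [
    {"url": "x.example", "text": "fund fund fund", "clicks": 3},
    {"url": "y.example", "text": "fund", "clicks": 120},
]

print([p["url"] for p in rank_by_frequency(pages, "fund")])
# frequency favors x.example
print([p["url"] for p in rank_by_clicks(pages)])
# click count favors y.example
```

The disagreement between the two orderings is the point: each definition of “relevance” is internally consistent, yet neither necessarily reflects what the searcher is actually looking for.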
However, these relevancy-determining methods are inefficient and inaccurate. For instance, suppose a user is seeking a mutual fund and types in the keywords “mutual” and “fund.” With all prior-art search engines, the user will not be shown a list of mutual funds with their associated details, but will instead be presented with a list of pages that simply contain the words “mutual” and “fund” in their text or associated metadata. Similarly, a user seeking a credit card cannot, with existing search engines, be given a list of credit cards ranked by their attributes. Instead, that user will be shown a huge list of credit-card-related sites, which may include credit card issuers, credit card customizing companies, stores that accept credit cards, and many others. These sites are presented to the user based on possibly irrelevant criteria, such as which one pays the highest per-click fee or which one generates the most Web traffic.
There is currently no way for a searcher to know, out of a list of usually thousands of located Web pages, which page contains what the searcher is seeking. Having to click on each of the inaccurately ranked Web pages returned by a search, in order to determine which is the most relevant, is tedious for the searcher and creates a great deal of frustration.
Therefore, a need exists to overcome the problems with the prior art as discussed above.