The World Wide Web provides a large collection of interconnected content in the form of electronic documents, images and other media. Over the years the web has grown to an immense size and contains webpages and other content on just about every subject a person could think of. As a result of this growth, locating content on the web has become a primary concern, and numerous search services are now available to address it.
Many of these search services take the form of a search engine, where a user can input a search query in the form of one or more search terms with connectors placed between the terms. The search engine then takes the search query and attempts to match it to webpages on the web that have been indexed by the search engine. By matching the search query to a number of different webpages, the search engine generates a list of search results and returns that list to the user. Each search result in the list typically includes a link through which the user can access the located webpage or other located electronic document.
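The matching step described above can be sketched with a toy inverted index. This is a minimal illustration only; the page contents, function names and AND-matching policy here are assumptions for the example, not the behavior of any particular search engine.

```python
def build_index(pages):
    """Map each term to the set of page ids whose text contains it."""
    index = {}
    for page_id, text in pages.items():
        for term in text.lower().split():
            index.setdefault(term, set()).add(page_id)
    return index

def search(index, query):
    """AND-match: return the ids of pages containing every query term."""
    terms = query.lower().split()
    if not terms:
        return set()
    results = index.get(terms[0], set()).copy()
    for term in terms[1:]:
        results &= index.get(term, set())
    return results

# Hypothetical indexed pages, for illustration only.
pages = {
    "page1": "cheap flights to paris",
    "page2": "paris travel guide",
    "page3": "cheap paris hotels and flights",
}
index = build_index(pages)
print(sorted(search(index, "cheap flights")))
```

A real engine would index far richer document descriptions and support the query connectors mentioned above (AND, OR, NOT); this sketch shows only the core idea of matching query terms against an index.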
The search engines typically locate what they consider to be “relevant” webpages by using specially created indexes and/or databases, where the relevancy of a document identified in an index or database is based on the presence of terms from the search query. The located documents are then further ranked so that the “best” results appear higher in the list of search results and the “poorer” results appear closer to the bottom or end of the list.
Additionally, consulting on website design has become a sizable business: consultants exploit tricks and loopholes in the more common algorithms used by search engines to have a webpage ranked higher in search results than another webpage that may be qualitatively as good as, if not better than, the higher-ranked webpage.
The ranking of the located search results is typically done using algorithms that base the ranking on how closely the search query matches the located webpages (usually as those webpages are described in the search engine's index or database) and on other criteria. Because the search engines receive only a search query containing search terms, the ranking of the webpages located by a search engine can be heavily based on the occurrence of the search terms in the index or database entry identifying each webpage. Other factors can also be taken into account, such as whether the domain name matches the search query or whether a webpage is a sponsored link whose owner has paid the search engine for a higher ranking.
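The occurrence-based ranking described above, with a boost for sponsored links, can be sketched as follows. The scoring scheme and the size of the sponsored boost are illustrative assumptions, not the formula of any actual search engine.

```python
from collections import Counter

def rank_by_term_frequency(pages, query, sponsored=()):
    """Rank page ids by how often the query terms occur in each page's
    indexed text; sponsored pages receive an assumed fixed boost."""
    terms = query.lower().split()
    scores = {}
    for page_id, text in pages.items():
        counts = Counter(text.lower().split())
        score = sum(counts[t] for t in terms)
        if page_id in sponsored:
            score += 10  # hypothetical paid-placement boost
        scores[page_id] = score
    # Highest score first, i.e. the "best" results at the top of the list.
    return sorted(scores, key=scores.get, reverse=True)

# Illustrative data: "a" repeats the term, so it ranks first on occurrence.
demo = {"a": "paris paris flights", "b": "paris hotel", "c": "travel blog"}
print(rank_by_term_frequency(demo, "paris"))
print(rank_by_term_frequency(demo, "paris", sponsored=("c",)))
```

Note how the sponsored boost lets page "c" outrank pages that actually contain the search term, mirroring the criticism in the surrounding text that such objective scores need not track quality.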
While many of these algorithms may be good at ranking located webpages by the criteria of “relevancy” the algorithms use, this ranking is based on objective factors. The algorithms are typically unable to determine which of the located webpages may be qualitatively “better” than others, a subjective quality assessment that cannot be made on a purely objective basis. By relying on objectively defined parameters, such as the number of times a search term appears on a webpage or whether the domain name contains one or more of the terms in the search query, these algorithms fail to rank the located webpages by subjective quality. Often, even though a webpage may use commonly used search terms, and therefore typically rank quite highly in a list of search results, its overall quality may not be high, or as good as that of another site that uses the search terms less frequently.
While many search engines do not even attempt to address how qualitatively good search results may be, some search engines do use algorithms that attempt to determine which search results are qualitatively “better” than other search results. One example of this is the algorithm disclosed by U.S. Pat. No. 6,285,999 to Page, which uses the number of links between webpages to try to assess the quality of a webpage. The algorithm is based on the underlying theory that websites that are linked to by a relatively large number of other unrelated websites are more likely to be qualitatively “better” than websites that have few other websites linking to them. Even in trying to determine how subjectively “good” a website might be, this algorithm is still limited to using objectively measurable factors (in this case the number of links) to attempt to approximate how subjectively “good” a webpage may be.
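The link-based idea above is widely known as PageRank, and its core can be sketched as a power iteration: each page spreads its score evenly over the pages it links to, with a damping factor modeling a random jump. This is a simplified textbook-style sketch; the algorithm as actually disclosed in the patent differs in details, and the graph below is invented for illustration.

```python
def pagerank(links, damping=0.85, iters=50):
    """Simplified power-iteration PageRank over a dict mapping each
    page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # uniform starting scores
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = rank[p] / len(outs)
                for q in outs:
                    new[q] += damping * share
            else:
                # Dangling page: spread its score evenly over all pages.
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# Hypothetical link graph: B, C and D all link to A.
links = {"A": ["B"], "B": ["A"], "C": ["A"], "D": ["A"]}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))
```

Consistent with the theory stated above, the heavily linked-to page A receives the highest score, even though the computation uses only the objectively measurable link structure.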
However, while the conventional search engines may not be able to produce subjectively ranked results, conventional search engines such as Google™, Yahoo! Search™, Excite™, etc., have massive resources invested in their infrastructure and equipment, including the compilation of massive indexes of webpages and other electronic documents. Countless hours of effort and enormous amounts of funding and research have gone into the creation of these search engines and their search engine indexes. In addition, because of the ever-changing nature of the Internet, and in particular the World Wide Web, these search indexes must be constantly maintained and updated. Operating and maintaining a popular search engine is a huge undertaking: a popular search engine might index hundreds of millions of webpages and respond to many millions of search queries a day. Creating a search engine for the World Wide Web or other parts of the Internet from scratch is an equally huge undertaking, one that also faces very savvy and well-funded competition from established search engines. While conventional search engines may not allow much subjectivity to factor into their ranking of search results, their searches do encompass huge numbers of webpages compiled in massive search indexes.
There is a need to provide some type of subjective rating of the quality, popularity or other criteria for search results; a rating that reflects how “good” a webpage or other electronic document may be, while still being able to use the search indexes and massive searching ability of conventional search engines.