The Internet distributes vast amounts of information on various topics across a multitude of computers. The same is true of a number of other communication networks, such as intranets and extranets. Although large amounts of information may be available on a network, finding the desired information may not be easy or fast.
Search engines have been developed to address the problem of finding desired information on a network. A conventional search engine includes a crawler (also called a spider or bot) that visits an electronic document on a network, “reads” it, and then follows links to other electronic documents within a website. The crawler returns to the website on a regular basis to look for changes. An index, which is another part of the search engine, stores information regarding the electronic documents that the crawler finds. In response to one or more user-specified search terms, the search engine returns a list of network locations (e.g., uniform resource locators (URLs)) that the search engine has determined include electronic documents relating to the user-specified search terms. Some search engines provide categories of information (e.g., news, web, images, etc.), and subcategories within those categories, for selection by the user, who can thus focus on an area of interest from these categories.
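The crawl-and-index behavior described above can be illustrated with a minimal sketch. The page contents, URLs, and link structure below are hypothetical, and the breadth-first traversal and inverted index are one simple way to realize the described behavior, not any particular engine's implementation:

```python
from collections import deque

# A toy in-memory "network": URL -> (document text, outgoing links).
# All URLs and contents here are invented for illustration.
PAGES = {
    "a.example/home":  ("saturn planet astronomy", ["a.example/cars"]),
    "a.example/cars":  ("saturn automobile dealer", ["a.example/home"]),
    "b.example/fruit": ("apple fruit orchard", []),
}

def crawl(seed):
    """Breadth-first crawl: visit a document, 'read' it, follow its links,
    and record each term in an inverted index (term -> set of URLs)."""
    index = {}
    seen, queue = set(), deque([seed])
    while queue:
        url = queue.popleft()
        if url in seen or url not in PAGES:
            continue
        seen.add(url)
        text, links = PAGES[url]
        for term in text.split():
            index.setdefault(term, set()).add(url)
        queue.extend(links)  # follow links to other electronic documents
    return index

index = crawl("a.example/home")
print(sorted(index["saturn"]))  # both reachable pages mention "saturn"
```

Because the crawl only follows links from the seed, the unlinked `b.example/fruit` page never enters the index, which mirrors why crawlers must revisit sites to discover new or changed documents.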
Search engine software generally ranks the electronic documents that fulfill a submitted search request in accordance with their perceived relevance, and provides a means for displaying search results to the user according to their rank. A typical relevance ranking is a relative estimate of the likelihood that an electronic document at a given network location is related to the user-specified search terms in comparison to other electronic documents. For example, a conventional search engine may provide a relevance ranking based on the number of times a particular search term appears in an electronic document and on its placement within the electronic document (e.g., a term appearing in the title is often deemed more important than the same term appearing at the end of the electronic document). Link analysis, anchor-text analysis, web page structure analysis, the use of a key term listing, and the URL text are other known techniques for ranking web pages and other hyperlinked documents.
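A frequency-and-placement ranking of the kind just described can be sketched as follows. The documents and the title weight of 3.0 are invented for illustration; real engines combine many more signals:

```python
def relevance_score(doc, terms, title_weight=3.0):
    """Toy relevance estimate: count occurrences of each search term,
    weighting matches in the title more heavily than matches in the body.
    The weighting scheme is illustrative, not any engine's actual formula."""
    title_words = doc["title"].lower().split()
    body_words = doc["body"].lower().split()
    score = 0.0
    for term in terms:
        t = term.lower()
        score += title_weight * title_words.count(t)  # placement bonus
        score += body_words.count(t)                  # raw frequency
    return score

docs = [
    {"title": "Saturn sedan review", "body": "the saturn drives well"},
    {"title": "Gas giants",          "body": "saturn has rings"},
]
ranked = sorted(docs, key=lambda d: relevance_score(d, ["saturn"]),
                reverse=True)
print(ranked[0]["title"])  # the document with a title match ranks first
```

Note that both documents match the query, but the one whose title contains the term scores 4.0 against 1.0, illustrating how placement outweighs frequency alone.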
Currently available search engines are generally limited to displaying search results according to the perceived rank. Unfortunately, this may provide insufficient information to the user because the highest ranking results may all fall within a single category of information. For example, the names of many products have more than one meaning (automobiles are named after planets, personal computers are named after fruit, etc.). The value of the first page of search results to the user may depend on whether the user is interested in information on, for instance, the planet Saturn or an automobile of the same name. As a result, it is often necessary for users to refine a query or read several pages of search results because too many of the displayed results on a first page relate to a single topic or category.
Thus, the need exists for a search engine that displays search results related to various topics or categories on a single page of search results independent of conventional rankings. By displaying such dispersed search results, the user is able to view a variety of results on the first page of results.
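One simple way to produce such dispersed results is to interleave the ranked list round-robin across categories, so that no single category monopolizes the first page. The sketch below assumes each result already carries a category label from some upstream classifier; the titles and categories are invented for illustration:

```python
def disperse(results, per_page=4):
    """Interleave ranked results round-robin across their categories so
    the first page of results shows a variety of topics, independent of
    the conventional rank order within the full list."""
    by_cat = {}
    for r in results:  # results arrive already sorted by relevance rank
        by_cat.setdefault(r["category"], []).append(r)
    buckets = list(by_cat.values())
    page, i = [], 0
    while len(page) < per_page and any(buckets):
        bucket = buckets[i % len(buckets)]
        if bucket:
            page.append(bucket.pop(0))  # best remaining result per category
        i += 1
    return page

results = [
    {"title": "Saturn dealer", "category": "autos"},
    {"title": "Saturn lease",  "category": "autos"},
    {"title": "Saturn parts",  "category": "autos"},
    {"title": "Saturn rings",  "category": "astronomy"},
]
page = disperse(results, per_page=2)
print([r["category"] for r in page])  # one result from each category
```

Even though the three highest-ranked results all concern automobiles, the two-slot first page surfaces one automotive and one astronomy result, letting the user see at a glance which topic matches their intent.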