The vast volume of information on the Web and the ever-increasing popularity of the Internet as a source of information have led to a common scenario in which a query fed to a search engine, say Yahoo, returns numerous results. Thus, for example, a user who wishes to learn the syntax of the markup language XML and enters the words XML and tutorial in Yahoo is likely to receive hundreds, if not thousands, of Web page results which meet this query. It is, therefore, only natural that the results be ranked such that the most relevant ones are displayed at the top of the result list, thus reducing the exhaustive effort involved in reviewing the list of results. To this end, there are numerous known techniques which involve various kinds of text analysis, e.g. counting how many times the sought word appears in the text of the Web page: the more instances of the word, the higher the score assigned to the page. In the case of more than one search word, the resulting Web page may, e.g., be tested in order to ascertain whether the sought words are close to each other (higher score) or, otherwise, are not in the same sentence or same paragraph (lower score). The latter are only simplified examples, and other, more sophisticated techniques for scoring document results by text-related (keyword match) analysis are known in the art.
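By way of illustration only, the simplified text-based scoring described above may be sketched as follows. The function name, the weights, and the proximity window below are illustrative assumptions for the purpose of this sketch, not the formula of any actual search engine:

```python
from itertools import product

def text_score(page_text, query_words):
    """Toy keyword-match scorer: term frequency plus a proximity bonus."""
    words = page_text.lower().split()
    query = [w.lower() for w in query_words]

    # Term frequency: each occurrence of a sought word adds one point.
    score = sum(words.count(q) for q in query)

    # Proximity: if every query word occurs, find the tightest span that
    # contains one occurrence of each; a small span earns a bonus
    # (the threshold of 10 words and the bonus of 5 are arbitrary).
    positions = [[i for i, w in enumerate(words) if w == q] for q in query]
    if all(positions):
        window = min(max(combo) - min(combo) for combo in product(*positions))
        if window <= 10:
            score += 5
    return score
```

Under such a scheme, a page repeating "XML tutorial" several times outscores a page mentioning it once in the title, which is precisely the shortcoming discussed below.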
Ranking documents exclusively or mainly on the basis of text analysis technologies has an inherent shortcoming in that pages of low or no “importance” may nevertheless be assigned a higher rank than those considered by the user as more important. For a better understanding, consider the latter example of the XML tutorial and further assume that IBM published and placed on the Web a user manual that includes an XML tutorial chapter, that this document is successfully visited by many users and, accordingly, that it is included in many “recommended link” lists on the Web. Naturally, the user who placed the query would expect this page to receive a high ranking score (as compared to other pages that result from the same query), which would lead to incorporation of the link to this IBM page at the top of the query result list. The text-based (keyword match) approaches specified above may fall short of providing this desired result insofar as the user is concerned. Thus, assume that there is another Web page (other than the specified IBM page) which contains the opinion of an unknown person on XML and includes, in one of its paragraphs, a few appearances of the combination XML tutorial. This page has no practical relevance for learning XML (it probably does not include an actual tutorial), is hardly visited, and has almost no links thereto. However, due to the repetitive appearance of the combination “XML tutorial” in the page (as compared to, say, a single occurrence of this combination in the IBM page, i.e. only in the title), a text-based ranking approach may assign a higher score to the opinion page than to the IBM page, which, insofar as the user is concerned, is counterproductive.
Moreover, if there are many pages which are likewise unduly assigned a higher score, the IBM page (which is the real page of interest insofar as the user is concerned) may be pushed down the score list, requiring the user to scroll through a few screens of results before arriving at the IBM result. Some users may even abandon reviewing the result list before getting to the appropriate link. Accordingly, insofar as the user is concerned, using the specified keyword match approaches may not only cause undue delay (until the sought result is found) but may sometimes result in a complete waste of time.
This significant drawback has been noticed. Accordingly, in Jon M. Kleinberg, “Authoritative Sources in a Hyperlinked Environment”, Proc. of ACM-SIAM Symposium on Discrete Algorithms, 1998 (which also appears as IBM Research Report RJ 10076, May 1997, and in U.S. Pat. Nos. 6,112,202 and 6,112,203), an algorithm is described for ranking a set of documents based on their link structure. The algorithm associates with each page an authority weight and a hub weight.
A good hub is a page that points to many good authorities; a good authority is a page that is pointed to by many good hubs.
The algorithm computes the pages' hub and authority scores through an iterative procedure, working on an in-memory representation of the link graph of the pages.
Intuitively, the “authority weight” indicates the so-called importance of the page. The importance of a page takes into account how many links point to the document and how “important” these linking documents are. The more important the page, the higher the scores. Reverting to the latter scenario, the IBM page would get much higher scores than the opinion page, which has only a few links pointing thereto, none of which is really important. This notwithstanding, the technique according to Kleinberg has a few significant shortcomings. It is designed to run on a focused and small set of pages, not on the whole Web. The algorithm operates on a relatively small number of pages and is not scalable to the size of the Web.
The reasons for this are that the algorithm needs to store the link graph in main memory and that the computations involved are relatively expensive. Moreover, it is an offline algorithm, i.e. it works on a given and fixed graph and cannot be executed while discovering the graph.
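A minimal sketch of the hub/authority iteration described above, assuming a toy in-memory link graph (the iteration count, the normalization, and all page names are illustrative choices for this sketch):

```python
def hits(links, iterations=50):
    """links: dict mapping each page to the list of pages it points to.

    Note that the whole graph lives in memory, which is exactly the
    scalability limitation discussed in the text.
    """
    pages = set(links) | {q for targets in links.values() for q in targets}
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # A good authority is a page pointed to by many good hubs.
        auth = {p: sum(hub[q] for q in pages if p in links.get(q, []))
                for p in pages}
        # A good hub is a page that points to many good authorities.
        hub = {p: sum(auth[q] for q in links.get(p, [])) for p in pages}
        # Normalize so the scores do not grow without bound.
        a_norm = sum(v * v for v in auth.values()) ** 0.5 or 1.0
        h_norm = sum(v * v for v in hub.values()) ** 0.5 or 1.0
        auth = {p: v / a_norm for p, v in auth.items()}
        hub = {p: v / h_norm for p, v in hub.items()}
    return hub, auth
```

On a graph in which several hub pages link to a single page, that page emerges with the highest authority score, mirroring the IBM page of the earlier example.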
Another proposed solution, which is more adequate for larger scale applications such as the Internet, is described in:
S. Brin, L. Page, “The Anatomy of a Large-Scale Hypertextual Web Search Engine”, Proc. 7th International World Wide Web Conference, 1998.
S. Brin, L. Page, R. Motwani, T. Winograd, “The PageRank Citation Ranking: Bringing Order to the Web”, Stanford Digital Library Working Paper SIDL-WP-1999-0120, 1998.
T. H. Haveliwala, “Efficient Computation of PageRank”, Stanford University Technical Report, 1999.
The algorithm uses the links between web pages (i.e., the link graph) to associate to every page a value called PageRank, which indicates the authority or importance of the page.
The idea is that a page is important (has a high rank) if there are many pages, or important pages, linking to it.
The algorithm works by repeatedly multiplying a matrix describing the link graph by the vector of rank values. There are efficient implementations of the algorithm that store the link graph on disk, thus allowing the rank to be computed for large graphs (millions of pages). Nevertheless, the method requires explicit storage of the link graph, thus requiring large amounts of storage. Also, the computation is expensive for a large number of pages, and it can take days or weeks to compute the ranks.
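The repeated matrix-vector multiplication may be sketched as follows, using adjacency lists instead of an explicit matrix. The damping factor of 0.85 is the value commonly cited for PageRank; the dangling-page handling, the iteration count, and the variable names are assumptions made for this sketch:

```python
def pagerank(links, damping=0.85, iterations=100):
    """links: dict mapping each page to the list of pages it points to."""
    pages = list(set(links) | {q for ts in links.values() for q in ts})
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Each step is one multiplication of the link matrix by the
        # rank vector, with the (1 - damping) teleportation term.
        new = {p: (1.0 - damping) / n for p in pages}
        for p in pages:
            targets = links.get(p, [])
            if targets:
                # A page shares its rank equally among its out-links.
                share = damping * rank[p] / len(targets)
                for q in targets:
                    new[q] += share
            else:
                # Dangling page: spread its rank over all pages.
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank
```

Even in this compact form, every iteration touches the entire link graph, which illustrates why the computation becomes expensive as the number of pages grows.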
In the Google search engine introductory part (www.Google.com/technology/index.html) it is specified that “The heart of our software is PageRank(™), a system for ranking web pages developed by our founders Larry Page and Sergey Brin at Stanford University. And while we have dozens of engineers working to improve every aspect of Google on a daily basis, PageRank continues to provide the basis for all of our web search tools.
PageRank relies on the uniquely democratic nature of the web by using its vast link structure as an indicator of an individual page's value. In essence, Google interprets a link from page A to page B as a vote, by page A, for page B. But, Google looks at more than the sheer volume of votes, or links a page receives; it also analyzes the page that casts the vote. Votes cast by pages that are themselves “important” weigh more heavily and help to make other pages “important.”
Important, high-quality sites receive a higher PageRank, which Google remembers each time it conducts a search. Of course, important pages mean nothing to you if they don't match your query. So, Google combines PageRank with sophisticated text-matching techniques to find pages that are both important and relevant to your search . . . ”
It appears, therefore, that Google utilizes the “page importance ranking” discussed above.
This approach also suffers from significant drawbacks, as follows:
It needs to explicitly store the link graph. It takes a large amount of time and resources to compute ranks, and it is an offline algorithm.
Thus, the hitherto known techniques need to construct the link graph, for example by storing, for each page, the list of pages it points to. Once this information is obtained and stored for all pages, the algorithm can be started. To compute the importance of a large number of pages, the algorithm needs a large amount of resources and time. When the computation is finished, the available results correspond to the graph as stored. The problem is that this graph may no longer be accurate (as the Web is dynamic, i.e. constantly changing), so the algorithm must be run repeatedly on the new graph.
There is, accordingly, a need in the art for substantially reducing the drawbacks of hitherto known techniques for calculating importance score of pages in the Web.
There is a further need in the art to combine the importance score of the invention with other ranking techniques, such as hitherto known keyword-match techniques, so as to obtain a combined ranking score for the Web pages.