1. Field of the Invention
The present invention generally relates to information access over the World Wide Web (“WWW”) and, more particularly, to an improved method and apparatus for enabling off-line Web access, i.e., access while disconnected from the Internet.
2. Description of the Prior Art
The World Wide Web (WWW or Web) is a network application that employs the client/server model to deliver information on the Internet to users. A Web server disseminates information in the form of Web pages. Web clients and Web servers communicate with each other via the standard Hypertext Transfer Protocol (HTTP). A (Web) browser is a client program that requests a Web page from a Web server and graphically displays its contents. Each Web page is associated with a special identifier, called a Uniform Resource Locator (URL), that uniquely specifies its location. Most Web pages are written in a standard format called Hypertext Markup Language (HTML). An HTML document is simply a text file that is divided into blocks of text called elements. These elements may contain plain text, multimedia content such as images, sound, and video clips, and even other elements. An important type of element, the anchor element, enables a Web page to embed hyperlinks, i.e., links to other Web pages. A Web browser typically displays hyperlinks in a distinctive format, such as underlined text or text in a different color. When a user clicks a link, the browser brings up the page referenced by that link, even if it resides on a different server. The Web page containing a hyperlink is referred to as the source document. The page referenced by a hyperlink is known as the target document.
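The extraction of hyperlinks from anchor elements can be sketched with a short routine. This is only an illustrative sketch using Python's standard `html.parser` module; the page content and URL shown are hypothetical.

```python
from html.parser import HTMLParser

class AnchorExtractor(HTMLParser):
    """Collects the target URLs of anchor (<a>) elements in an HTML page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # An anchor element embeds a hyperlink via its href attribute.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Hypothetical source document containing one hyperlink.
page = '<html><body><p>See <a href="http://example.com/target.html">the target document</a>.</p></body></html>'
extractor = AnchorExtractor()
extractor.feed(page)
# extractor.links now holds the URLs of the target documents.
```

Following each collected URL in turn yields the target documents reachable from the source document.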
A useful mode of web browsing is disconnected web access, otherwise known as offline browsing, which permits a user to view web pages while he/she is disconnected from the Internet. Disconnected web access is needed when there is no networking capability available at the location of a (mobile) computer, or when the user wants to avoid use of the network to reduce network charges and/or to extend battery life. It is also a viable fallback position when network characteristics degrade beyond usability. Disconnected web access works by storing (hoarding) necessary Web pages on the hard disk of the client computer prior to disconnection and then, while disconnected, servicing user requests for Web pages from the local copies. To maximize content availability, the user often needs to explicitly specify a set of Web pages that he is likely to access. Before going offline, these specified Web pages, called base pages, are downloaded to the client computer, along with some other pages that are reachable by following hyperlinks from the base pages. It is not sufficient to hoard base pages only, because the user typically does not stop at a base page: while offline, he may request a page that is several clicks away from a base page.
Conceptually, a base page and all the pages that can be reached from it form a tree whose edges correspond to hyperlinks: the root is the base page, the second-level tree nodes are the pages one click away from the base page, the third-level nodes are the pages two clicks away from the base page, and so on. The size of such a tree is often excessively large, due to the dense interconnection of Web pages. Hoarding all the pages in the tree would require a prohibitively long time and disk space far beyond the local disk's capacity. Therefore, only a small subset of those pages may be hoarded. Existing systems, such as Microsoft's Internet Explorer, limit hoarded pages to those that are within a certain number of links from the base page. They are effectively based on a breadth-first approach, giving the pages at the same level equal consideration. However, a user's browsing behavior typically follows a depth-first pattern, and not all links are of equal importance to the user. This implies that existing systems either waste significant time and space hoarding Web pages that are not needed by the user, or leave many necessary pages unavailable to the user while offline. Some existing systems allow a user to refine the selection of pages based on a page's attributes, such as its file type and whether it is on the same server (or in the same directory) as the base page. However, these options alone are not sufficient to limit the hoard volume. They must be combined with the hoard-by-level approach and therefore do not ameliorate the problem. Since it is inconvenient or even impossible for a user to explicitly specify all the Web pages he will possibly access offline, a method is needed that hoards Web pages in anticipation of the user's future requirements, so that the limited resources of time and disk space can be devoted to hoarding the Web pages that are most likely to be needed by the user offline.
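The hoard-by-level scheme of existing systems can be sketched as a breadth-first traversal of the hyperlink tree with a depth cutoff. The following is a minimal sketch, not any system's actual implementation; the `get_links` callback and the toy link structure are hypothetical stand-ins for fetching a page and parsing its hyperlinks.

```python
from collections import deque

def hoard_by_level(base_url, get_links, max_depth, max_pages):
    """Breadth-first hoarding: gather every page within max_depth clicks of
    the base page, giving pages at the same level equal consideration,
    subject to an overall page budget."""
    hoarded, queue = set(), deque([(base_url, 0)])
    while queue and len(hoarded) < max_pages:
        url, depth = queue.popleft()
        if url in hoarded or depth > max_depth:
            continue
        hoarded.add(url)  # in a real system, download the page to local disk
        for link in get_links(url):  # hyperlinks embedded in the page
            queue.append((link, depth + 1))
    return hoarded

# Hypothetical link structure: the base page links to a and b; a links to c.
web = {"base": ["a", "b"], "a": ["c"], "b": [], "c": []}
pages = hoard_by_level("base", lambda u: web.get(u, []), max_depth=1, max_pages=10)
# With max_depth=1, page c (two clicks away) is not hoarded.
```

The sketch makes the weakness discussed above concrete: the depth cutoff treats every level-one link alike, regardless of how interesting each link actually is to the user.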
As is known in the art, one can try to model a user's interests and/or to predict a user's future needs based on the user's past behavior. For example, by observing users' past Web usage, a system can build a data structure that reflects the interrelationship between URL references. The system is then able to speculate, given a URL reference (i.e., an access to a Web page), what other URLs are likely to be referenced in the near future. The system can further prefetch the corresponding Web pages before the user actually demands them, reducing user-perceived access latency. One such technique is described by V. N. Padmanabhan and J. C. Mogul in an article entitled Using Predictive Prefetching to Improve World Wide Web Latency, Computer Communications Review, 26(3):22-36, July 1996. They construct a dependency graph which has a node for every URL that has been referenced. Correlations between URLs are captured by edges between the nodes, weighted by the likelihood that one will be referenced soon after the other. D. Duchamp discusses a similar technique in Prefetching Hyperlinks, Proceedings of Second USENIX Symposium on Internet Technologies and Systems, pages 127-138, USENIX, Boulder, Colo. His system prefetches hyperlinks embedded in a Web page based on a usage profile that indicates how often those links have been previously accessed relative to the embedding page. These prefetching techniques are designed to improve Web access performance in a connected environment and are not suitable for Web hoarding, which aims at optimizing data availability during disconnection. Specifically, they can predict only the pages that have been previously referenced, severely limiting the demand references that can benefit from the techniques. In order to make a substantial number of useful predictions, they often rely on observing a plurality of users, as opposed to a single user. That, however, also increases the number of false predictions and the attendant wasteful consumption of precious resources.
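A dependency graph of the general kind described above might be sketched as follows. This is an illustrative simplification under stated assumptions, not the actual technique of Padmanabhan and Mogul: here an edge from one URL to another is weighted by the fraction of accesses to the first that were immediately followed by an access to the second, and the access log is hypothetical.

```python
from collections import defaultdict

class DependencyGraph:
    """Sketch of a URL dependency graph: nodes are referenced URLs, and a
    weighted edge records how often one URL is referenced soon after another,
    so that an access can trigger speculative prefetches."""
    def __init__(self):
        self.follows = defaultdict(lambda: defaultdict(int))  # edge counts
        self.accesses = defaultdict(int)                      # node counts
        self.last_url = None

    def record(self, url):
        """Observe one URL reference in the user's access stream."""
        if self.last_url is not None:
            self.follows[self.last_url][url] += 1
        self.accesses[url] += 1
        self.last_url = url

    def predict(self, url, threshold=0.5):
        """Return URLs whose edge weight from `url` meets the threshold;
        these are candidates for prefetching."""
        total = self.accesses[url]
        return [nxt for nxt, n in self.follows[url].items()
                if total and n / total >= threshold]

# Hypothetical access stream: "index" is usually followed by "news".
g = DependencyGraph()
for url in ["index", "news", "index", "news", "index", "sports"]:
    g.record(url)
```

Note how the sketch exhibits the limitation discussed in the text: `predict` can only ever propose URLs that have already appeared in the access stream.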
Instead of using URL references to model user behavior, an alternative is to observe the document content seen by users. Appropriately aggregating the content of the Web documents that a user has browsed over time will give a reasonably accurate indication of the user's interests. Such a learned model of user interests can be used to assist the user in browsing the Web by suggesting hyperlinks that are potentially interesting to the user. Two systems of this kind are described by H. Lieberman in Letizia: An Agent That Assists Web Browsing, Proceedings of International Joint Conference on Artificial Intelligence, Montreal, Canada, August 1995, and by D. Mladenic in Personal WebWatcher: Design and Implementation, Technical Report IJS-DP-7472, Department for Intelligent Systems, J. Stefan Institute, Slovenia, respectively. Again, these systems are targeted at a connected environment and have no pressing need to identify interesting hyperlinks to the fullest extent possible. They emphasize the actual interestingness of a hyperlink (i.e., the interestingness of the target document), instead of the perceived interestingness of the hyperlink (i.e., how interesting the link appears to the user in the context of the embedding page). Further, they consider the user's historical and persistent interests only, and not the user's current, and possibly new, interests.
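A crude form of such content aggregation can be sketched as a running term-frequency profile built from the text of browsed documents. This is only an illustrative sketch with hypothetical document text, far simpler than the learning methods used by the systems cited above.

```python
from collections import Counter
import re

def update_profile(profile, document_text):
    """Aggregate word frequencies from a browsed document into a running
    profile of the user's interests (a crude content-based model)."""
    words = re.findall(r"[a-z]+", document_text.lower())
    profile.update(words)
    return profile

profile = Counter()
update_profile(profile, "Playoff scores and playoff schedule")
update_profile(profile, "Playoff injury report")
# The most frequent content terms approximate the user's persistent interests.
top_term, top_count = profile.most_common(1)[0]
```

A hyperlink whose anchor text or target document scores highly against such a profile would be deemed interesting, which, as noted above, captures historical interests but not current, possibly new, ones.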
Although the content of a document says much about its reader's interests, so do other attributes associated with the document. In particular, the URL of a document describes the location of the document in terms of the server and the directory path on the server. The composition of a URL is potentially very useful because it reflects the hierarchical clustering of documents. Consider the following hypothetical usage pattern: a user frequently browses documents in the sports directory of one newspaper's Web site, but he seldom reads documents in the finance directory on the same Web site or sports documents on another newspaper's site. Chances are that the user is very interested in the first newspaper's sports articles, more so than in the same newspaper's finance articles or the second newspaper's sports articles. Note how inferences can be made here regarding the user's interests without knowing the exact content of those articles.
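The inference from URL composition can be sketched by decomposing each accessed URL into its server and directory prefixes and tallying accesses per prefix. The server names and access history below are hypothetical, mirroring the usage pattern just described.

```python
from urllib.parse import urlparse
from collections import Counter

def directory_prefixes(url):
    """Decompose a URL into its server and successive directory prefixes,
    reflecting the hierarchical clustering of documents on the server."""
    parts = urlparse(url)
    prefixes = [parts.netloc]
    path = ""
    for segment in parts.path.strip("/").split("/")[:-1]:
        path += "/" + segment
        prefixes.append(parts.netloc + path)
    return prefixes

# Hypothetical access history: mostly sports articles on one newspaper's site.
history = [
    "http://news-a.example/sports/game1.html",
    "http://news-a.example/sports/game2.html",
    "http://news-a.example/finance/stocks.html",
    "http://news-b.example/sports/game1.html",
]
counts = Counter(p for url in history for p in directory_prefixes(url))
# counts now favors news-a.example/sports over the other directories,
# without any knowledge of the articles' content.
```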
As is also known in the art, it is possible to compare the relatedness, or similarity, of two entities with respect to certain properties of the entities. First, each entity is represented by a feature vector, where the elements of the vector are features characterizing the entity and each element has a weight to reflect its importance in the representation of the entity. Next, the relatedness of the two entities is computed as the distance between the two corresponding feature vectors. Such a technique is commonly used in text retrieval systems, based on a comparison of content features (words and phrases) extracted from the text of documents and queries. The specifics of the feature selection procedures, feature weighting schemes, and similarity metrics are generally known to those of ordinary skill in the art. Feature selection and weighting techniques tailored for HTML content are described by D. Mladenic in Machine Learning on Non-Homogeneous, Distributed Text Data, Doctoral Dissertation, Faculty of Computer and Information Science, University of Ljubljana, Slovenia, 1998.
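One common instance of this technique represents each entity as a sparse weighted feature vector and measures relatedness by the cosine of the angle between the vectors. The feature names and weights below are hypothetical; this is a minimal sketch of one well-known similarity metric, not the only choice.

```python
import math

def cosine_similarity(a, b):
    """Compare two weighted feature vectors, each represented as a
    {feature: weight} dictionary; 1.0 means identical direction,
    0.0 means no features in common."""
    dot = sum(w * b.get(f, 0.0) for f, w in a.items())
    norm_a = math.sqrt(sum(w * w for w in a.values()))
    norm_b = math.sqrt(sum(w * w for w in b.values()))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)

# Hypothetical feature vectors: a document profile and two candidate pages.
doc = {"sports": 2.0, "playoff": 1.0}
query_sports = {"sports": 1.0}
query_finance = {"finance": 1.0}
```

Here the sports query scores higher against the document profile than the finance query, reflecting the greater overlap of weighted features.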
Accordingly, a need exists for a method for enabling a user's disconnected Web access that overcomes the deficiencies of the prior art. This method should hoard Web pages in descending order of user-perceived interestingness, preferably taking into account the user's preferences regarding both document content and document attributes.