Computers have become an integral tool in a wide variety of applications, such as finance and commercial transactions, three-dimensional and real-time graphics, computer-aided design and manufacturing, healthcare, telecommunications, and education. Computers continue to find new applications as their performance increases and their cost decreases due to advances in hardware technology and rapid software development. Furthermore, the functionality and usefulness of a computer system can be dramatically enhanced by coupling stand-alone computers together to form a computer network. In a computer network, users may readily exchange files, share information stored on a common database, pool resources, communicate via e-mail and even video teleconference.
One popular type of network setup is known as the “client/server” computing network. Basically, users perform tasks through their own dedicated desktop computers (i.e., the “clients”), and the desktop computers are networked to larger, more powerful central computers (i.e., “servers”). Servers are high-speed machines that hold programs and data shared by network users. For a better understanding of a client/server computer network, refer now to FIG. 1, which shows a conventional client/server computer network 100. The network 100 includes a plurality of client computers 101–106 coupled to a network of remote server computers 110.
An assortment of network and database software enables communication between the various clients and the servers. Hence, in a client/server arrangement, the data is easy to maintain because it is stored in one location and managed by the servers; the data can be shared by a number of local or remote clients; the data is easily and quickly accessible; and clients may readily be added or removed.
In today's networking environment, many clients desire higher bandwidth and lower latency (the delay between a request and its response) to access web and streaming media applications. This can be accomplished by providing caching servers at more local points in the network that keep copies of files previously retrieved from the remote servers for subsequent repeated access by the local clients. The theory underlying caching is that since the same file may be used more than once, it may be more efficient (both in terms of speed and resource utilization) to keep a copy locally rather than retrieve it a second time from a remote source. Typically, each caching server caches a small set of “hot” recently accessed objects in a fast and relatively expensive random access memory attached to its internal bus, and a somewhat larger set of such objects in a slower and cheaper random access peripheral storage device such as a magnetic or optical disk.
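The hot-object tier of such a caching server can be sketched as a small, fixed-capacity cache with least-recently-used eviction. The sketch below is illustrative only, assuming a hypothetical `fetch_remote` callback standing in for retrieval from the remote server; it is not a description of any particular product.

```python
from collections import OrderedDict

class HotObjectCache:
    """Minimal sketch of a fixed-capacity cache of 'hot' objects
    with least-recently-used (LRU) eviction.

    fetch_remote(name) is an assumed callback that retrieves the
    object from a remote server on a cache miss.
    """

    def __init__(self, capacity, fetch_remote):
        self.capacity = capacity
        self.fetch_remote = fetch_remote
        self._store = OrderedDict()  # name -> object, oldest first

    def get(self, name):
        if name in self._store:
            # Cache hit: mark the object as most recently used.
            self._store.move_to_end(name)
            return self._store[name]
        # Cache miss: retrieve from the remote source and keep a local copy.
        obj = self.fetch_remote(name)
        self._store[name] = obj
        if len(self._store) > self.capacity:
            # Evict the least recently used object to make room.
            self._store.popitem(last=False)
        return obj
```

A real caching server would add a second, larger tier on disk for objects evicted from memory, but the hit/miss/evict logic above captures the basic idea.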
Prefetching is a known technique for analyzing current and/or past file requests to predict what files are likely to be requested in the future. Those predictions are then utilized to retrieve files from a remote server on a less urgent basis before they are actually requested, thereby reducing not only latency but also network congestion. It differs from caching in that the focus is not on whether to keep a local copy of a file that has already been retrieved or updated (which is mostly a question of how best to use the available local storage capacity) but rather on whether to obtain from the remote server a file that is not currently available locally and that is not currently the subject of any pending requests. Since many different prediction criteria may bear on what files are likely to be requested in the future, it is desirable that these predictions be made as comprehensively and efficiently as possible.
Accordingly, what is needed is a method and system for prefetching objects from a network in a comprehensive and efficient fashion. The method and system should be simple, cost effective and capable of being easily adapted to existing technology. The present invention addresses these needs.