1. The Field of the Invention
The present invention relates to network caching mechanisms; and more specifically, to network caching mechanisms in which policies govern caching conditions.
2. Related Technology
Computing technology has transformed the way we work and play. Computer networking, in particular, allows individual computers to exchange information with each other. Computer networks can be interconnected to allow for expansive communication. The Internet, for example, is a well-known global computer network that consists of an amalgamation of smaller interconnected networks.
One primary use of a network is to allow users or applications to access remotely-stored information. Storing information remotely (as opposed to storing it locally on the local computer system) is advantageous in that it allows many computer systems to access the information. In addition, the local computer system need not waste memory resources (e.g., disk space or system memory) retaining the information unless the local computer system specifically requests that the information be downloaded. Accordingly, only remotely-stored items that are interesting enough to be downloaded occupy local memory resources.
Although networks are useful, remotely storing information does have its disadvantages. For example, networks sometimes exhibit a significant degree of latency. In other words, some time may elapse between when information is requested and when it is received. Network bandwidth is also limited. For example, it can take notable time to download larger pieces of information from (and upload such information to) the remote site. If the information were stored locally, however, the latency and bandwidth constraints of the read/write channel to and from local memory would be comparatively minor.
Network caching is a known technology that balances the benefits of remotely-storing information on a network with the benefits of locally-storing information on the local computer system. In particular, when remotely accessing information, a copy of the information is made and stored locally. That copy represents the state of the information as it existed at the time the copy was made. Accordingly, when requesting the information, the local computer system may obtain the information from the locally-stored copy, rather than from the remote location. This allows for fast access times and preserves network bandwidth.
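The general caching pattern just described may be sketched as follows. This is an illustrative Python sketch only, not part of any mechanism described herein; the names (NetworkCache, fetch_remote) are hypothetical:

```python
import time

class NetworkCache:
    """Illustrative sketch of a local cache for remotely-stored data.
    fetch_remote stands in for any network call to a remote service."""

    def __init__(self, fetch_remote):
        self._fetch_remote = fetch_remote  # callable: key -> value
        self._store = {}                   # key -> (value, time cached)

    def get(self, key):
        # Serve the locally-stored copy when one exists; otherwise
        # go to the remote service and retain a copy for next time.
        if key in self._store:
            value, _cached_at = self._store[key]
            return value
        value = self._fetch_remote(key)
        self._store[key] = (value, time.time())
        return value
```

Once a copy has been retained, subsequent requests for the same key are answered locally, avoiding the network round trip, but they return the state of the data as it existed when the copy was made.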
However, if new changes are made to the remotely-stored information, the local copy will not change absent synchronization of the information. Accordingly, the local copy is said to have a certain degree of staleness. Although accessing the cached copy of the information is fast, there is typically no guarantee that the information is still accurate as compared to the information that is remotely stored.
Conventional network caching mechanisms may generally be broken down into two categories. One category of caching mechanism is illustrated in FIG. 1 as off-line caching mechanism 100. In the off-line caching mechanism 100, the application 110 obtains a local copy of data 111A from an off-line store 112. A remote service 113 stores a remote copy of data 111B. At the time of the last synchronization, the local copy of the data 111A was the same as the remote copy of the data 111B. A synchronization mechanism 114 facilitates the occasional synchronization of the remote copy of the data 111B with the local copy of the data 111A.
The off-line caching mechanism is advantageous in that it allows the application 110 to communicate with a local off-line store 112. Accordingly, access to the data occurs relatively quickly. However, the application 110 has little control over when the synchronization mechanism 114 performs synchronization.
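The off-line model may be sketched as follows. This Python sketch is illustrative only and greatly simplified (it shows a one-way refresh and omits conflict resolution); OfflineStore and synchronize are hypothetical names:

```python
class OfflineStore:
    """Sketch of the off-line store: the application reads and writes
    only this local store, never the remote service directly."""

    def __init__(self):
        self.data = {}

    def read(self, key):
        return self.data.get(key)

    def write(self, key, value):
        self.data[key] = value

def synchronize(local, remote):
    # Simplistic one-way refresh: pull the remote copy of the data
    # into the local store. Real synchronization mechanisms also push
    # local changes and resolve conflicts; notably, the application
    # has little control over when this step runs.
    local.data.update(remote)
```

Between synchronizations, the application sees only the local copy, so remote updates are invisible until the synchronization mechanism next runs.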
Another conventional model is illustrated in FIG. 2 as the transparent caching mechanism 200. In this model, the application 210 makes requests for the remote data 211B and intends to communicate directly with the remote service 213 to make that request. This request and the corresponding response are represented by bi-directional arrow 215. However, unbeknownst to the application 210, the requests are intercepted by a local cache 212, which determines whether a local copy of the data 211A is available and sufficiently fresh to provide to the application 210, instead of going to the remote service 213 for the remote copy of the data 211B. If the local cache 212 has such a copy of the data, the local cache 212 returns a response back to the application 210. If not, the request is allowed to proceed to the remote service 213, which ultimately provides the corresponding response.
The transparent caching mechanism 200 has an advantage in that the application 210 accesses accurate data most of the time, since the request will be satisfied by providing a remote copy of the data 211B or by providing a local copy of the data 211A, which is usually relatively fresh. However, the application 210 does not know whether the data returned in the response is a local copy of the data 211A or a remote copy of the data 211B. Accordingly, the application 210 does not know whether the data is completely fresh, or whether there is some staleness. Different applications may have different needs for how fresh the data should be. For example, suppose that the local copy of the data 211A is two hours old. This may be sufficiently fresh for an encyclopedia application, but too stale for other applications such as, for example, a stock ticker application that needs stock information that is no more than 15 minutes old.
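The transparent model and its freshness limitation may be sketched as follows. This Python sketch is illustrative only; TransparentCache and its single max_age_seconds parameter are hypothetical, and the explicit now argument merely makes the passage of time easy to simulate:

```python
import time

class TransparentCache:
    """Sketch of transparent caching: the caller believes it is
    talking to the remote service, but requests are intercepted and
    answered locally when a sufficiently fresh copy exists."""

    def __init__(self, fetch_remote, max_age_seconds):
        self._fetch_remote = fetch_remote
        self._max_age = max_age_seconds  # one freshness policy for all callers
        self._store = {}                 # key -> (value, time cached)

    def request(self, key, now=None):
        now = time.time() if now is None else now
        entry = self._store.get(key)
        if entry is not None:
            value, cached_at = entry
            if now - cached_at <= self._max_age:
                return value  # local copy deemed fresh enough
        # No copy, or copy too stale: go to the remote service.
        value = self._fetch_remote(key)
        self._store[key] = (value, now)
        return value
```

Note that the cache applies one freshness threshold for every caller, and the caller cannot tell whether a given response came from the local copy or the remote service; a threshold loose enough for an encyclopedia application may be far too loose for a stock ticker.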