1. Field of the Invention
The present invention relates generally to the field of computer networking, and in particular to a method for maintaining cache storage which relieves the cache of unnecessarily high volume and cost resulting from cache object replacements due to requests from so-called "auto-fetch" utilities.
2. Related Art
In network devices equipped with cache storage, it is common to implement a cache replacement policy based on an LRU (Least-Recently Used) technique. That is, the least recently referenced objects (or pages) in the cache are removed from storage for replacement. Some existing LRU-based policies use semantics of object sizes and types for further optimization.
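The LRU technique described above can be sketched as follows. This is a minimal illustrative example, not a depiction of any particular existing implementation; the class name and capacity parameter are assumptions for illustration only.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: when full, the least recently referenced
    entry is removed to make room for a new one."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # keys ordered oldest -> newest

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)  # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            # Evict the least recently used entry (oldest in the ordering).
            self.entries.popitem(last=False)
```

For example, in a cache of capacity two, inserting a third object evicts whichever of the first two was referenced least recently.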
Recently, so-called "auto-fetch" utilities have gained popularity with users who routinely browse the World Wide Web ("the Web"). These utilities are designed for off-line browsing, retrieving predetermined Web objects of particular interest to the user during off-line times, thereby reducing the user's on-line browsing time by ensuring the objects are already available when the user logs on. Where the user accesses the Web through a network proxy, however, auto-fetch utilities tend to have an adverse effect on a proxy cache that uses a conventional LRU-based replacement policy. Since an auto-fetch utility can continuously generate arbitrarily large numbers of requests for Web objects to the network proxy, objects (or pages) popular with the majority of so-called "normal" users (that is, those not using auto-fetch utilities) are replaced by the objects requested by auto-fetch utilities. As a result, normal users may experience longer visible latencies than they otherwise would, due solely to the abnormally large volumes of cache objects attributable to auto-fetch requests. Moreover, the problem is not limited to network proxies. The same problem can arise on a network server, such as a content server, which serves large numbers of users. Again, so-called "normal" users may experience degraded performance when accessing such a server due to the inordinate resource demands of auto-fetch utilities.
In view of the increased popularity of auto-fetch utilities, there is a need for a method of ensuring that users of such utilities do not unfairly monopolize shared cache resources to the detriment of other users.
According to an embodiment of the present invention, a method for maintaining a common resource shared by a plurality of entities, wherein the common resource contains a plurality of entries, each of which is associated with one of the plurality of entities, includes determining an amount of the common resource occupied by entries associated with a given one of the plurality of entities. A number of the associated entries are removed from the common resource to reduce the occupied amount if the occupied amount exceeds a predetermined threshold.
According to another embodiment of the present invention, a method for maintaining a cache storage resident in a network device coupled to a plurality of client devices, wherein the cache storage contains a plurality of cached objects, includes determining an amount of cache resource occupied by cached objects associated with a given client device. A number of those cached objects are removed from the cache storage to reduce the amount of occupied cache resource if the amount exceeds a predetermined threshold.
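The per-entity threshold described in the two embodiments above can be sketched as follows. All names, the quota value, and the choice to evict a client's own least recently used objects first are illustrative assumptions, not limitations of the method.

```python
from collections import OrderedDict

class QuotaLRUCache:
    """LRU cache with a per-client quota: when the cached objects
    associated with one client exceed a predetermined threshold,
    that client's own least recently used objects are removed,
    leaving other clients' objects untouched."""

    def __init__(self, capacity, per_client_quota):
        self.capacity = capacity
        self.quota = per_client_quota
        self.entries = OrderedDict()  # (client, key) -> object, oldest first
        self.usage = {}               # client -> count of cached objects

    def get(self, client, key):
        if (client, key) not in self.entries:
            return None
        self.entries.move_to_end((client, key))
        return self.entries[(client, key)]

    def put(self, client, key, obj):
        if (client, key) in self.entries:
            self.entries.move_to_end((client, key))
            self.entries[(client, key)] = obj
            return
        self.entries[(client, key)] = obj
        self.usage[client] = self.usage.get(client, 0) + 1
        # Per-client threshold: shed this client's own LRU objects first.
        while self.usage[client] > self.quota:
            self._evict(owner=client)
        # Global capacity: fall back to plain LRU across all clients.
        while len(self.entries) > self.capacity:
            self._evict(owner=None)

    def _evict(self, owner):
        # Remove the least recently used entry belonging to `owner`
        # (or to anyone, if owner is None).
        for client_key in list(self.entries):
            client, _ = client_key
            if owner is None or client == owner:
                del self.entries[client_key]
                self.usage[client] -= 1
                return
```

Under this sketch, a heavy auto-fetch client that exceeds its quota displaces only its own cached objects, so objects cached on behalf of normal users remain available.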