There exist applications in which data must repeatedly be allocated to stacks or locations within a queue based on priority, for example the anticipated need for that data. One method of doing so is a least recently used (LRU) algorithm. Under an LRU algorithm, when the content of a storage or memory location must be replaced with the content of another such location (rather than merely updated in place), the least recently used value is removed to provide the needed space. The traditional least recently used algorithm carries a high degree of overhead, because every access to the resource causes some update by the management algorithm.
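The per-access overhead described above can be seen in a minimal sketch of an LRU cache. This is an illustrative implementation, not the method of any patent discussed here; note that even a read-only lookup must update the recency bookkeeping.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache sketch. Every access, including a read,
    updates recency ordering -- the overhead noted in the text."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # least recently used entry first

    def get(self, key):
        if key not in self.entries:
            return None
        # Even a read requires a management update:
        # move the entry to the most-recently-used end.
        self.entries.move_to_end(key)
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        elif len(self.entries) >= self.capacity:
            # Evict the least recently used entry to provide space.
            self.entries.popitem(last=False)
        self.entries[key] = value
```

For example, with a capacity of two, reading key "a" before inserting a third key causes "b", not "a", to be evicted.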
Numerous patents refer to the use of an LRU algorithm to manage a cache. One such patent is U.S. Pat. No. 4,489,378, "Automatic Adjustment of the Quantity of Prefetch Data in a Disk Cache Operation," issued Dec. 18, 1984 to Jerry D. Dixon et al. In that patent, the LRU table has one listing for each page in cache memory. The forward pointer in each listing of the LRU table points to the listing of a more recently used page, and the forward pointer of the most recently used listing points to the first free page. Similarly, the backward pointer of each LRU listing points to a less recently used page, and the last listing in that chain is the least recently used page. When a page is written, its listing becomes the most recently used listing in the LRU table: its backward pointer points to the previously most recently used listing, and its directory pointer points to the first free page.
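The doubly linked structure described above can be sketched as follows. This is a simplified illustration of a forward/backward-pointer LRU table, not code from the cited patent; class and field names are hypothetical, and the free-page and directory-pointer details are omitted.

```python
class Listing:
    """One LRU-table listing. `fwd` points toward more recently used
    listings; `bwd` points toward less recently used listings."""
    def __init__(self, page):
        self.page = page
        self.fwd = None  # more recently used neighbor
        self.bwd = None  # less recently used neighbor

class LRUTable:
    def __init__(self):
        self.mru = None  # most recently used listing
        self.lru = None  # least recently used listing (end of chain)

    def insert(self, page):
        """Add a new listing at the most-recently-used end."""
        listing = Listing(page)
        listing.bwd = self.mru
        if self.mru:
            self.mru.fwd = listing
        self.mru = listing
        if self.lru is None:
            self.lru = listing
        return listing

    def touch(self, listing):
        """When a page is used, its listing becomes the most recently
        used: unlink it, then relink it at the MRU end so that its
        backward pointer references the previously MRU listing."""
        if listing is self.mru:
            return
        if listing.bwd:
            listing.bwd.fwd = listing.fwd
        if listing.fwd:
            listing.fwd.bwd = listing.bwd
        if listing is self.lru:
            self.lru = listing.fwd
        listing.bwd = self.mru
        listing.fwd = None
        self.mru.fwd = listing
        self.mru = listing
```

Walking the chain from `lru` via `fwd` pointers visits pages from least to most recently used, so the eviction candidate is always found at `lru` in constant time.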
U.S. Pat. No. 4,464,712, "Second Level Cache Replacement Method and Apparatus," issued Aug. 7, 1984 to Robert P. Fletcher, discloses a two-level cache in which the first level is a fast but limited-size cache used by the processor. The second level is a slower but larger cache that contains the data already in the first-level cache as well as additional data. Both caches are managed by a least recently used method. For that method, a "use" at the second level is defined as any access either directly to the second-level cache or to the first-level cache where the data in the first-level cache is also in the second-level cache. Thus, there is duplication of data in the caches.