The present invention relates to look-aside buffers used with computer main storage devices and more particularly to prioritizing addresses contained in such look-aside buffers as a function of the type of data identified by such addresses.
Look-aside buffers are used with large main storage devices to hold the addresses of pages of memory, so that an address is available without significant processing to determine it each time a page is referenced. A page of memory has a higher-order address associated with it which is stored in a particular location in the look-aside buffer. Several pages map to each location, and a location has space for only a limited number of the pages' addresses, such as two. Various schemes have been devised to select which two or more addresses are to be retained in the look-aside buffer. Such schemes usually apply a least recently used criterion for retaining addresses.
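The location structure and least recently used retention described above can be sketched as follows. This is an illustrative model only; the class name, the number of locations, and the page-to-location mapping are assumptions for the sketch and not part of the invention.

```python
class LookAsideBuffer:
    """Model of a look-aside buffer whose locations each hold a
    limited number of page addresses (here two), retained on a
    least recently used basis."""

    def __init__(self, num_locations, ways=2):
        # Each location is a list of (page, translation) pairs,
        # ordered most recently used first.
        self.locations = [[] for _ in range(num_locations)]
        self.num_locations = num_locations
        self.ways = ways

    def lookup(self, page):
        """Return the stored translation for a page, or None on a miss."""
        loc = self.locations[page % self.num_locations]
        for i, (p, translation) in enumerate(loc):
            if p == page:
                loc.insert(0, loc.pop(i))  # mark as most recently used
                return translation
        return None

    def insert(self, page, translation):
        """Insert a resolved address, evicting the least recently
        used entry if the location is full."""
        loc = self.locations[page % self.num_locations]
        if len(loc) >= self.ways:
            loc.pop()  # discard least recently used entry
        loc.insert(0, (page, translation))
```

For example, with four locations, pages 0, 4, and 8 all map to the same location; once pages 0 and 4 occupy it, inserting page 8 evicts whichever of the two was least recently referenced.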
U.S. Pat. No. 4,059,850 to Van Eck et al. describes a memory system which includes a least recently used criterion for assigning priority to word groups. The priority of word groups is adjustable in that an invalid word group is assigned the lowest priority. U.S. Pat. No. 4,437,155 to Sawyer et al. describes a cache store for storing segments and deleting older segments, wherein segments which are most likely to be accessed soon are read in addition to the segment specified by a command. The likely-to-be-accessed segments are given a priority below that of the segment specified by the command. U.S. Pat. No. 4,322,795 to Lange et al. relates to a main memory shared by two processors wherein sections of a cache are indicated as empty when one of the processors has changed corresponding data in the main memory.
When different types of data are located within a main storage device and a look-aside buffer or cache is employed to reduce access time, data of one type can replace data of another type in the buffer, so that the replaced address must be determined again the next time it is referenced.
Consider a computer which executes instructions requiring two or more data operands which may lie on different pages of memory, and which prefetches a subsequent instruction specified by the current instruction. The subsequent instruction's address is inserted in the buffer together with the address of one of the required data operands. Resolution of a further required operand address then causes the other required operand address to be removed from the buffer. The current instruction is restarted: one operand address is resolved, the subsequent instruction address is resolved, and then the next operand address is resolved, again removing the other operand address and forcing another restart of the current instruction. The result is an infinite loop. This was previously handled by delaying access to subsequent instructions and by check-pointing the current instruction so that it could be resumed rather than restarted. That approach required more instructions and reduced performance due to the additional time spent waiting for the subsequent instruction and the extra checkpoint processing.
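The restart loop described above can be demonstrated with a small self-contained model. One two-address buffer location is modeled as a list, most recently used entry first; the page numbers, the helper names, and the restart limit are assumptions chosen for the sketch, not details of any actual machine.

```python
def touch(location, page, ways=2):
    """Reference a page against one buffer location.
    Return True on a hit; on a miss, resolve the page and insert
    its address, evicting the least recently used entry."""
    if page in location:
        location.remove(page)
        location.insert(0, page)  # mark as most recently used
        return True
    if len(location) >= ways:
        location.pop()            # evict least recently used address
    location.insert(0, page)
    return False


def run_instruction(max_restarts=4):
    """Pages 0 and 4 are the two data operands; page 8 holds the
    prefetched subsequent instruction.  All three map to the same
    two-address location.  Return the number of restarts taken to
    complete, or None if the instruction never completes."""
    location = []
    for restart in range(max_restarts):
        touch(location, 0)          # resolve first operand address
        touch(location, 8)          # prefetch subsequent instruction
        touch(location, 4)          # resolve second operand: evicts operand 0
        if 0 in location and 4 in location:
            return restart          # both operand addresses present
        # first operand's address was removed; restart the instruction
    return None                     # infinite loop: never completes
```

Each pass evicts the first operand's address while resolving the second, so `run_instruction()` returns None regardless of the restart limit, which is the infinite loop the invention addresses.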