Typically computers require fast access to portions of computer memory so that instructions stored in the memory can be executed by the computer processor in a timely manner. Managing the location of an instruction that executes in a computer system must likewise be done in a timely manner, so that the instruction is available for execution without an additional access to the memory, cache memory, or another storage medium. Cache miss latency is therefore a performance problem in the execution of computer-based instructions. It will be appreciated that cache memory is a small, fast unit of the memory that may be located close to the processor to ensure fast access by the processor to the information held in the cache. The terms "cache" and "cache memory" will be used interchangeably herein.
Typically the processor operates faster than the memory can be accessed. When the processor finds the information it needs in the cache, this is referred to herein as a "cache hit." When the processor is not able to find the information it needs in the cache, this is referred to herein as a "cache miss." Cache miss latency has increased as the disparity between the speed of processor operations and the speed of memory access has increased.
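The cache hit and cache miss behavior described above can be illustrated with a minimal sketch. The following model is purely illustrative (the class name, capacity, and addresses are assumed for this example and are not part of any implementation described herein); it models a small fully associative cache with least-recently-used replacement:

```python
from collections import OrderedDict

class SimpleCache:
    """Illustrative fully associative cache with LRU replacement."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()  # address -> None, ordered by recency
        self.hits = 0
        self.misses = 0

    def access(self, address):
        if address in self.lines:
            self.hits += 1                      # cache hit
            self.lines.move_to_end(address)     # mark as most recently used
        else:
            self.misses += 1                    # cache miss: fetch from memory
            if len(self.lines) >= self.capacity:
                self.lines.popitem(last=False)  # evict least recently used line
            self.lines[address] = None

# An assumed access sequence: repeated addresses hit only while still resident.
cache = SimpleCache(capacity=2)
for addr in [0x10, 0x20, 0x10, 0x30, 0x20]:
    cache.access(addr)
print(cache.hits, cache.misses)  # → 1 4
```

In this sketch, the second access to 0x10 hits, but 0x20 has been displaced by 0x30 before its second access, which therefore misses and must be fetched again from memory.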
Pre-fetching is the fetching of instructions into the cache before they are needed. The pre-fetch distance is the elapsed time between initiating a pre-fetch and using its result, and it should be large enough to hide the cache miss latency. However, the pre-fetch distance should not be so large that the pre-fetched instructions are displaced by other information placed in the cache before the pre-fetched instructions are used. Therefore, timeliness is the measure of whether an instruction is pre-fetched before it is needed but not pre-fetched so soon that it must be discarded before it can be used. Generating timely pre-fetches has been a problem with pre-fetching solutions.
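The timeliness constraint above can be sketched as a simple predicate. The model and its parameter values (a miss latency of 100 cycles and a cache residency window of 400 cycles) are assumptions chosen for illustration, not figures from any particular system:

```python
# Assumed parameters for illustration only.
MISS_LATENCY = 100   # cycles needed to bring a line in from memory
RETENTION = 400      # cycles a line survives in the cache before displacement

def prefetch_timely(issue_cycle, use_cycle):
    """A pre-fetch is timely if its distance hides the miss latency
    but the line is not displaced before it is used."""
    distance = use_cycle - issue_cycle
    arrived_in_time = distance >= MISS_LATENCY  # large enough to hide latency
    still_resident = distance <= RETENTION      # not so large it is displaced
    return arrived_in_time and still_resident

print(prefetch_timely(0, 150))   # → True  (timely)
print(prefetch_timely(0, 50))    # → False (too late: latency not hidden)
print(prefetch_timely(0, 1000))  # → False (too early: displaced before use)
```

The two failure cases correspond exactly to the two problems described above: a pre-fetch issued too late still incurs the miss latency, while one issued too early is evicted before it can be used.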
A pre-fetch is useless if it brings a line into the cache that will not be used before it is displaced. Conversely, a pre-fetch is accurate if the pre-fetched line is actually used. It will be appreciated that a "line" includes at least one instruction and represents a unit of instructions that may be pre-fetched on a computer system.
Another problem with pre-fetching is obtaining appropriate coverage. It will be appreciated that coverage is the identification of useful pre-fetched instruction requests while minimizing useless pre-fetched instruction requests. Attempting to obtain optimal coverage can increase the probability of useless pre-fetches; that is, pre-fetching larger amounts of instructions may increase the probability of useless pre-fetches. The pre-fetch distance should be large enough to hide the cache miss latency while not being so large as to increase the number of unnecessary pre-fetches, and striking this balance has been a problem in the past.
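The accuracy and coverage notions above can be expressed as simple ratios over a trace. The following sketch uses invented trace data (the addresses and sets are assumed for illustration) and the conventional definitions: accuracy as the fraction of issued pre-fetches that are used, and coverage as the fraction of would-be demand misses that the pre-fetches eliminate:

```python
# Assumed trace data for illustration only.
prefetched = {0x100, 0x140, 0x180, 0x1C0}          # lines pre-fetched
used_before_displacement = {0x100, 0x140}          # lines used while resident
demand_misses = {0x100, 0x140, 0x200}              # misses absent pre-fetching

accurate = prefetched & used_before_displacement   # useful pre-fetches

# Accuracy: useful pre-fetches out of all pre-fetches issued.
accuracy = len(accurate) / len(prefetched)

# Coverage: demand misses eliminated out of all demand misses.
coverage = len(accurate & demand_misses) / len(demand_misses)

print(accuracy)  # → 0.5  (2 of 4 pre-fetches were useful)
print(coverage)  # half of the pre-fetches issued were useless
```

In this example, issuing more pre-fetches could raise coverage toward 1.0 but, as noted above, would tend to lower accuracy by adding useless pre-fetches that displace useful lines.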
Pre-fetching problems are discussed with reference to "Cooperative Prefetching: Compiler and Hardware Support for Effective Instruction Prefetching in Modern Processors," Chi-Keung Luk and Todd C. Mowry, Proceedings of MICRO-31, Nov. 30-Dec. 2, 1998, and "Prefetching using Markov Predictors," Doug Joseph and Dirk Grunwald, Proceedings of the International Symposium on Computer Architecture, June 1997.