A computer is configured to decrease the waiting time for referring to a main memory by disposing, between a processor and the main memory, a cache memory that is faster than the main memory, and retaining data read from the main memory in the cache memory.
However, numerical calculation processes and other equivalent processes that use large-scale data suffer frequent cache misses because of the low locality of their data references, and may therefore be unable to sufficiently reduce the waiting time for referring to the main memory. A known technique for coping with such cache misses is prefetching, which fetches data from the main memory into the cache memory before the data is used.
Prefetching techniques are roughly classified into two types, i.e., software prefetching and hardware prefetching. Software prefetching is a method of providing a prefetch instruction for a processor and inserting the prefetch instruction into a program. Hardware prefetching, on the other hand, is a method in which a hardware component dynamically detects a data access pattern, predicts the data to be accessed next, and dynamically prefetches the predicted data.
A known technique related to hardware prefetching determines a prefetching target data area by automatically detecting data transfers having continuity in address. Another known technique is stride prefetching, which detects data accesses made at a fixed interval (which will hereinafter also be termed a stride width).
[Patent document 1] Japanese Laid-Open Patent Publication No. 2000-112901
[Patent document 2] Japanese Laid-Open Patent Publication No. 08-212081