The present invention relates generally to the field of data prefetching, and more particularly to assigning prefetch resources to threads running on a multi-core system, thereby controlling the memory bandwidth allocated to each thread.
In computer architecture, instruction prefetch is a technique used in microprocessors to speed up the execution of a program by reducing wait states. Modern microprocessors are much faster than the memory where the program is stored, meaning that the program's instructions cannot be read from memory fast enough to keep the microprocessor busy. Adding a cache can provide faster access to needed instructions.
Prefetching occurs when a processor requests instructions or data from main memory before they are actually needed. Once the instructions or data return from memory, they are placed in a cache. When they are actually needed, they can then be accessed much more quickly from the cache than if the processor had to request them from memory.
Since program instructions are generally executed sequentially, performance is likely to be best when instructions are prefetched in order. Alternatively, the prefetch may be part of a complex branch prediction algorithm, where the processor tries to anticipate the result of a calculation and fetch the right instructions in advance. In the case of data prefetching, the prefetcher can take advantage of the spatial locality found in most applications.