Cache modules are high-speed memories that facilitate fast retrieval of information, including data and instructions. Typically, cache modules are relatively expensive and are characterized by their small size, especially in comparison to higher-level memory modules.
The performance of modern processor-based systems usually depends upon the performance of their cache modules, and especially upon the relationship between cache hits and cache misses. A cache hit occurs when a requested information unit is already present in a cache module. A cache miss occurs when the requested information unit is not present in the cache module and has to be fetched from an alternative memory, termed a higher-level memory module.
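The hit/miss distinction above can be illustrated with a minimal sketch. This is a hypothetical simulation of a small direct-mapped cache; the names (`DirectMappedCache`, `CACHE_LINES`, `backing_store`) are illustrative assumptions, not taken from the source.

```python
CACHE_LINES = 4  # deliberately tiny cache for illustration

class DirectMappedCache:
    """Hypothetical model: counts hits and misses on reads."""

    def __init__(self, backing_store):
        self.backing_store = backing_store  # stands in for the higher-level memory module
        self.tags = [None] * CACHE_LINES    # which address each line currently holds
        self.data = [None] * CACHE_LINES
        self.hits = 0
        self.misses = 0

    def read(self, address):
        line = address % CACHE_LINES
        if self.tags[line] == address:
            self.hits += 1                  # cache hit: unit already present
        else:
            self.misses += 1                # cache miss: fetch from the higher level
            self.tags[line] = address
            self.data[line] = self.backing_store[address]
        return self.data[line]
```

In this model, the first read of an address is a miss that triggers a fetch from the backing store; a repeated read of the same address is a hit served from the cache line.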
Various cache module and processor architectures, as well as data retrieval schemes, have been developed over the years to meet increasing performance demands. These architectures include multi-port cache modules, multi-level cache hierarchies, superscalar processors and the like.
Processors and other information requesting components are capable of requesting information from a cache module and, alternatively or additionally, from another memory module that can be a higher-level memory module. The higher-level memory module can also be a cache memory, another internal memory and even an external memory.
There are various ways to write information to a cache module or a higher-level memory module. Write-through involves writing one or more information units to the cache module and to the higher-level memory module simultaneously. Write-back involves writing one or more information units to the cache module only. The cache module sends one or more updated information units to the higher-level memory module once the updated information unit or units are removed from the cache. The latter operation is also known in the art as flushing the cache.
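The two write policies can be contrasted with a short sketch. This is a hypothetical model under simplifying assumptions (a dictionary stands in for the higher-level memory module, and class names are illustrative):

```python
class WriteThroughCache:
    """Hypothetical write-through model: both levels are updated at once."""

    def __init__(self, memory):
        self.memory = memory   # higher-level memory module
        self.lines = {}

    def write(self, address, value):
        self.lines[address] = value
        self.memory[address] = value       # simultaneously written to the higher level


class WriteBackCache:
    """Hypothetical write-back model: the higher level is updated only on flush."""

    def __init__(self, memory):
        self.memory = memory
        self.lines = {}
        self.dirty = set()                 # addresses updated but not yet written back

    def write(self, address, value):
        self.lines[address] = value
        self.dirty.add(address)            # higher level deliberately left stale

    def flush(self, address):
        # "Flushing the cache": an updated unit is sent to the higher-level
        # memory module when it is removed from the cache.
        if address in self.dirty:
            self.memory[address] = self.lines[address]
            self.dirty.discard(address)
        self.lines.pop(address, None)
```

After a write, the write-through model leaves the higher-level memory current, while the write-back model leaves it stale until the line is flushed.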
Some prior art cache modules perform mandatory fetch operations, hardware-initiated fetch operations (also known as speculative fetch operations or speculative pre-fetch operations) and user-initiated pre-fetch operations (also known as software pre-fetch requests). A mandatory fetch operation fetches an information unit that caused a cache miss. Speculative fetch operations aim to reduce cache miss events by replacing invalid segments with valid segments. A user-initiated pre-fetch request can be initiated by a program being executed by a processor; such requests aim to bring data into the cache module before the execution of the program results in cache misses.
A typical scenario for user-initiated pre-fetching is image processing. If a certain area of an image is to be processed, and the image data that represents that area cannot be fetched in a single fetch operation, the program can include pre-fetch instructions that bring the required image area into the cache module before the program starts processing the image data. A single user-initiated pre-fetch instruction can program the cache to start a user-initiated pre-fetch request sequence that brings all the necessary data blocks into the cache.
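The expansion of a single pre-fetch request into a sequence covering an image area can be sketched as follows. This is a hypothetical illustration: the block size, the `prefetch` callback, and all parameter names are assumptions introduced here, not details from the source.

```python
BLOCK_SIZE = 64  # bytes brought in per fetch; an assumed, illustrative value

def prefetch_area(prefetch, base_address, row_stride, rows, row_bytes):
    """Issue one pre-fetch request per block covering a rows x row_bytes image area.

    prefetch     -- callback that issues a single pre-fetch for one block address
    base_address -- address of the first byte of the image area
    row_stride   -- distance in bytes between the starts of consecutive image rows
    """
    for row in range(rows):
        row_start = base_address + row * row_stride
        for offset in range(0, row_bytes, BLOCK_SIZE):
            prefetch(row_start + offset)   # one request per data block

# Usage: collect the generated request sequence instead of issuing real fetches.
issued = []
prefetch_area(issued.append, base_address=0x1000,
              row_stride=1024, rows=2, row_bytes=128)
# Two rows, two blocks per row -> a sequence of four pre-fetch requests.
```

The point of the sketch is that one call (standing in for one pre-fetch instruction) generates the whole request sequence, so the processing loop that follows finds the data already in the cache.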
Pre-fetch operations place additional load on the machine's resources. This may degrade performance and stall mandatory fetch operations.
There is a need for an efficient method and device for performing pre-fetch operations with minimal performance impact and maximal bus utilization.