Instruction and operand prefetching maximize the efficiency of a pipelined machine by keeping each stage of the pipeline busy. Prefetching can be done along sequential as well as branch paths. Most of the time, the data needed are resident in the cache, translation lookaside buffer (TLB), or access-register-translation (ART) lookaside buffer (ALB) and are immediately available. When the data are not found, however, more time is required to fetch them from the storage subsystem, to perform dynamic address translation (DAT), or to perform ART.
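The cost of such misses can be seen in a simple expected-latency model. The sketch below is illustrative only; the cycle counts and hit rates are hypothetical values, not figures from this paper.

```python
# Hypothetical latencies in cycles -- illustrative assumptions, not measured values.
CACHE_HIT = 1        # data resident in the cache
CACHE_MISS = 20      # extra cost to fetch from the storage subsystem
TLB_MISS_DAT = 15    # extra cost of dynamic address translation on a TLB miss

def avg_access_cycles(cache_hit_rate, tlb_hit_rate):
    """Expected cycles per access: a TLB miss adds the DAT penalty,
    and a cache miss adds the storage-subsystem penalty."""
    translation = (1 - tlb_hit_rate) * TLB_MISS_DAT
    data = cache_hit_rate * CACHE_HIT + (1 - cache_hit_rate) * CACHE_MISS
    return translation + data

# With 95% cache hits and 99% TLB hits, the average access is still
# dominated by the rare misses.
print(round(avg_access_cycles(0.95, 0.99), 2))
```

Even with high hit rates, the miss penalties dominate the average, which is why avoiding unnecessary misses matters.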
A problem arises when the miss is due to a prefetch along a predicted path that may or may not be taken. If the data are indeed used, performance benefits. If it turns out that the data are not used, however, the effective cache, TLB, or ALB latency increases and performance is degraded. In addition, the unneeded fetch displaces a cache line, TLB entry, or ALB entry that may be needed later.
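The displacement effect can be illustrated with a toy LRU cache model. This is a minimal sketch under assumed behavior (a two-line fully associative LRU cache with made-up addresses "A", "B", "P"), not a model of any real machine:

```python
from collections import OrderedDict

class LRUCache:
    """Tiny LRU cache model; only line residency is tracked, not data."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()

    def touch(self, addr):
        """Access (or prefetch) a line; returns True on a hit."""
        hit = addr in self.lines
        if hit:
            self.lines.move_to_end(addr)      # mark as most recently used
        else:
            if len(self.lines) >= self.capacity:
                self.lines.popitem(last=False)  # evict the least recently used line
            self.lines[addr] = True
        return hit

cache = LRUCache(capacity=2)
cache.touch("A")         # demand fetch: A becomes resident
cache.touch("B")         # demand fetch: B becomes resident
cache.touch("P")         # prefetch along a predicted (wrong) path evicts A
print(cache.touch("A"))  # the later demand access to A now misses
```

If the predicted path had been taken, the prefetch of "P" would have hidden its miss latency; because it was not, the prefetch both wasted a fetch and turned a future hit on "A" into a miss.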