1. Field of the Invention
This invention relates to microprocessor architecture and, more particularly, to an instruction prefetch mechanism.
2. Description of the Related Art
In various systems, the front end of a processor core typically includes an instruction fetch unit for generating fetch requests to retrieve instructions from an instruction cache. On a cache hit, the fetched instructions are typically stored in a fetch FIFO or fetch queue located between the instruction fetch unit and an instruction decode unit. On a cache miss, a memory request is usually generated and sent to the next level of memory, e.g., a level two (L2) cache. The fetch pipeline may then be stalled until the cache miss is serviced. This usually results in a significant performance penalty, since it delays the execution of instructions.
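The stall-on-miss behavior described above can be illustrated with a minimal sketch. This is not part of any claimed invention; the class, method names, and FIFO capacity are hypothetical, chosen only to model the sequence of events (hit fills the fetch FIFO, miss is sent to the L2 cache and stalls the pipeline until serviced).

```python
from collections import deque

class SimpleFetchPipeline:
    """Toy model of an in-order fetch front end that stalls on a cache miss."""

    def __init__(self, icache_lines, fifo_capacity=4):
        self.icache = set(icache_lines)   # addresses present in the instruction cache
        self.fetch_fifo = deque()         # holds fetched instructions for the decoder
        self.fifo_capacity = fifo_capacity
        self.stalled_on = None            # address of the miss being serviced by L2

    def fetch(self, addr):
        """Issue a fetch request; return True on a hit, False when stalled."""
        if self.stalled_on is not None:
            return False                  # pipeline is stalled until the miss returns
        if addr in self.icache:
            if len(self.fetch_fifo) < self.fifo_capacity:
                self.fetch_fifo.append(addr)
                return True
            return False                  # fetch FIFO full: decoder has fallen behind
        self.stalled_on = addr            # miss: request goes to the L2 cache
        return False

    def l2_fill(self, addr):
        """Model the L2 cache servicing the outstanding miss."""
        if self.stalled_on == addr:
            self.icache.add(addr)
            self.stalled_on = None
```

In this model, once `fetch` takes a miss, every subsequent fetch fails until `l2_fill` services it, which is the performance penalty the paragraph describes.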
In other systems, the fetch mechanism may initiate an out-of-order fetching mode while the cache miss is being serviced. During the out-of-order fetching mode, a fetch operation is performed for one or more new instructions. On a cache hit corresponding to a new instruction, the data is typically stored in the fetch FIFO. On a cache miss corresponding to the new instruction, a memory request is usually generated and sent to the next level of memory, e.g., an L2 cache. If there is a cache hit in the next level of memory, the data is typically stored in the fetch FIFO. In this implementation, entries are allocated in the fetch FIFO whether there is a cache hit or a cache miss corresponding to the new instruction. Therefore, to perform the out-of-order fetch, the fetch FIFO needs to have space available for the data. Even if the fetch FIFO has available space, it may fill up during the out-of-order fetching mode and stall the fetch process. Furthermore, in this design, the increased size and complexity of the fetch FIFO and corresponding management mechanism may increase the die area and cost of the system.
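The allocation behavior of this out-of-order fetching mode can likewise be sketched. Again, this is purely illustrative and not part of any claimed invention; the names and the small FIFO capacity are hypothetical. The key point modeled is that every new fetch, hit or miss, consumes a fetch-FIFO entry, so the FIFO can fill up and stall the fetch process even while the original miss is still outstanding.

```python
from collections import deque

class OutOfOrderFetch:
    """Toy model of out-of-order fetch: while a miss is serviced, newer fetches
    proceed, but every fetch (hit or miss) allocates a fetch-FIFO entry."""

    def __init__(self, icache_lines, fifo_capacity=2):
        self.icache = set(icache_lines)
        self.fifo_capacity = fifo_capacity
        self.fetch_fifo = deque()         # entries: [addr, data_ready]

    def fetch(self, addr):
        """Return True if a FIFO entry was allocated, False if the FIFO is full."""
        if len(self.fetch_fifo) >= self.fifo_capacity:
            return False                  # no free entry: out-of-order fetch stalls
        hit = addr in self.icache
        # an entry is allocated whether the access hits or misses
        self.fetch_fifo.append([addr, hit])
        return True

    def l2_fill(self, addr):
        """An L2 hit returns data; mark the waiting FIFO entry as ready."""
        for entry in self.fetch_fifo:
            if entry[0] == addr and not entry[1]:
                entry[1] = True
```

Because misses reserve FIFO entries while waiting on the L2 cache, this design requires a larger fetch FIFO and management mechanism to avoid stalls, which is the die-area and cost drawback the paragraph notes.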