1. Field of the Invention
The present invention relates to techniques for improving the performance of computer systems. More specifically, the present invention relates to a method and apparatus for merging checkpoints in a processor that supports speculative execution.
2. Related Art
Advances in semiconductor fabrication technology have given rise to dramatic increases in microprocessor clock speeds. This increase in microprocessor clock speeds has not been matched by a corresponding increase in memory access speeds. Hence, the disparity between microprocessor clock speeds and memory access speeds continues to grow, and is beginning to create significant performance problems. Execution profiles for fast microprocessor systems show that a large fraction of execution time is spent not within the microprocessor core, but within memory structures outside of the microprocessor core. This means that microprocessor systems spend a large fraction of time waiting for memory references to complete instead of performing computational operations.
Efficient caching schemes can help reduce the number of memory accesses that are performed. However, when a memory reference, such as a load, generates a cache miss, the subsequent access to level-two (L2) cache or memory can require dozens or hundreds of clock cycles to complete, during which time the processor is typically stalled (and therefore idle), performing no useful work.
A number of forms of “speculative execution” have been proposed or are presently used to prevent the processor from stalling when a cache miss occurs. For example, some processor designers have proposed generating a checkpoint and entering a “scout mode” during processor stall conditions. In scout mode, instructions are speculatively executed to prefetch future loads and stores, but results are not committed to the architectural state of the processor. For example, see U.S. patent application Ser. No. 10/741,944, filed 19 Dec. 2003, entitled, “Generating Prefetches by Speculatively Executing Code through Hardware Scout Threading,” by inventors Shailender Chaudhry and Marc Tremblay. The scout mode technique allows a processor to perform computations during stall conditions, which enables the processor to prefetch future loads and stores. However, the scout mode technique suffers from the disadvantage of having to re-compute results of computational operations that were performed during scout mode.
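The scout mode behavior described above can be illustrated with a simplified software model. This is only a sketch of the idea, not the hardware design: the instruction encoding, the cache and prefetch interfaces, and the placeholder load value are all illustrative assumptions.

```python
# Simplified software analogue of scout mode: on a stall the processor
# snapshots (checkpoints) its state, speculatively runs ahead to issue
# prefetches for future loads, and then discards all results on restore.
# Instruction format and cache model are assumptions for this sketch.
def scout_mode(instructions, registers, cache):
    checkpoint = dict(registers)  # snapshot architectural state
    prefetched = []
    for instr in instructions:
        if instr["op"] == "load":
            addr = registers[instr["addr_reg"]]
            if addr not in cache:
                prefetched.append(addr)  # issue a prefetch for a future miss
                cache.add(addr)
            registers[instr["dst"]] = 0  # speculative placeholder value
        elif instr["op"] == "add":
            registers[instr["dst"]] = (registers[instr["a"]] +
                                       registers[instr["b"]])
    # Results are not committed: restore the checkpoint. All computational
    # work above (other than the prefetches) must be re-executed later,
    # which is precisely the disadvantage noted in the text.
    registers.clear()
    registers.update(checkpoint)
    return prefetched
```

Note that the prefetches survive (they warm the cache), while every register result is thrown away when the checkpoint is restored.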
To avoid performing some of these re-computations, processor designers have proposed entering an “execute-ahead” mode when the processor encounters a data-dependent stall condition. In execute-ahead mode, the processor defers instructions that cannot be executed because of unresolved data dependencies and executes other non-deferred instructions in program order.
When a data dependency is ultimately resolved, the processor transitions to a “deferred mode” to execute the deferred instructions. In deferred mode, the processor executes deferred instructions that are able to be executed while re-deferring deferred instructions that still cannot be executed because of unresolved data dependencies. For example, see U.S. Pat. No. 7,114,060, filed 14 Oct. 2003, entitled, “Selectively Deferring the Execution of Instructions with Unresolved Data Dependencies as They Are Issued in Program Order,” by inventors Shailender Chaudhry and Marc Tremblay.
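The execute-ahead and deferred modes described in the preceding two paragraphs can be sketched as a small software model. The instruction representation, the readiness test, and the deferred-queue structure below are assumptions made for illustration; they are not the hardware implementation.

```python
# Illustrative model of execute-ahead and deferred modes: instructions
# with unresolved source operands (None) are deferred; once a dependency
# resolves, the deferred queue is drained, re-deferring any instruction
# that still cannot execute.
from collections import deque

def _ready(instr, registers):
    # An instruction is executable when all its source registers hold values.
    return all(registers.get(src) is not None for src in instr["srcs"])

def _execute(instr, registers):
    # Toy ALU: sum the source operands into the destination register.
    registers[instr["dst"]] = sum(registers[s] for s in instr["srcs"])

def execute_ahead(instructions, registers, deferred):
    """Execute non-deferred instructions in program order, deferring
    instructions that cannot execute due to unresolved dependencies."""
    for instr in instructions:
        if _ready(instr, registers):
            _execute(instr, registers)
        else:
            deferred.append(instr)

def deferred_mode(registers, deferred):
    """Execute deferred instructions that have become executable,
    re-deferring those that still have unresolved dependencies."""
    progress = True
    while deferred and progress:
        progress = False
        for _ in range(len(deferred)):
            instr = deferred.popleft()
            if _ready(instr, registers):
                _execute(instr, registers)
                progress = True
            else:
                deferred.append(instr)  # re-defer
```

In use, `execute_ahead` runs until a dependency (for example, an outstanding load) resolves, at which point `deferred_mode` drains the queue, possibly unlocking chains of dependent deferred instructions.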
By allowing a processor to continue to perform work during processor stall conditions, the above-described speculative-execution techniques can significantly increase the amount of computational work the processor completes.
Unfortunately, the computational work performed during execute-ahead mode can be lost when the processor encounters a condition which requires the processor to return to a previous checkpoint. Because thousands of instructions can be executed during execute-ahead mode, the lost computational work can significantly reduce processor performance.
In order to avoid losing this computational work, some processor designers have proposed using multiple checkpoints to avoid returning to a remote checkpoint. For a more detailed explanation of setting multiple checkpoints see pending U.S. patent application “The Generation of Multiple Checkpoints in a Processor that Supports Speculative Execution,” by inventors Shailender Chaudhry, Marc Tremblay, and Paul Caprioli, having Ser. No. 11/084,655, and filing date 18 Mar. 2005. In such a system, a processor can generate additional checkpoints when it encounters certain conditions. For example, the processor can generate an additional checkpoint if the processor encounters: an independent load miss; a predicted branch instruction with an unresolvable data dependency; a memory barrier or atomic instruction; or if the number of instructions executed since the previous checkpoint reaches a predetermined number. The processor then returns to the more-recently generated checkpoint instead of returning to the remote checkpoint, which minimizes the amount of computational work that must be redone.
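The multiple-checkpoint policy just described can be modeled in software as follows. The condition names, the checkpoint limit, and the instruction-count threshold below are illustrative assumptions, not values taken from the referenced application.

```python
# Illustrative model of multiple-checkpoint management: checkpoints are
# generated on certain conditions, held in a bounded set of slots, and
# rollback returns to the most recently generated checkpoint.
import copy

MAX_CHECKPOINTS = 4  # assumed hardware limit on concurrent checkpoints

class CheckpointedState:
    def __init__(self, registers):
        self.registers = registers
        self.checkpoints = []  # stack: oldest first, most recent last

    def should_checkpoint(self, event, insns_since_last, threshold=256):
        # Conditions under which an additional checkpoint is generated.
        return (event in ("independent_load_miss",
                          "unresolvable_branch",
                          "membar_or_atomic")
                or insns_since_last >= threshold)

    def take_checkpoint(self):
        """Snapshot the architectural state if a checkpoint slot is free."""
        if len(self.checkpoints) >= MAX_CHECKPOINTS:
            return False  # all checkpoints allocated; cannot take another
        self.checkpoints.append(copy.deepcopy(self.registers))
        return True

    def rollback(self):
        """Return to the most recently generated checkpoint, minimizing
        the computational work that must be redone."""
        self.registers = self.checkpoints.pop()
        return self.registers
```

The `take_checkpoint` failure path illustrates the limitation discussed next: once all slots are allocated, no further checkpoints can be generated.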
Unfortunately, the number of checkpoints that a system can support is limited by practical considerations, such as processor area constraints. Hence, even processors that support multiple checkpoints cannot support a large number of checkpoints. Consequently, at runtime a processor can exhaust all available checkpoints, after which it cannot allocate additional checkpoints, which can adversely affect the performance of the processor.
Hence, what is needed is a processor that supports multiple checkpoints without the above-described problem.