1. Field of the Invention
This invention is related to the field of processors and, more particularly, to instruction fetching mechanisms within processors.
2. Description of the Related Art
Superscalar processors achieve high performance by executing multiple instructions per clock cycle and by choosing the shortest possible clock cycle consistent with the design. As used herein, the term "clock cycle" refers to an interval of time accorded to various stages of an instruction processing pipeline within the processor. Storage devices (e.g. registers and arrays) capture their values according to the clock cycle. For example, a storage device may capture a value according to a rising or falling edge of a clock signal defining the clock cycle. The storage device then stores the value until the subsequent rising or falling edge of the clock signal, respectively. The term "instruction processing pipeline" is used herein to refer to the logic circuits employed to process instructions in a pipelined fashion. Although the pipeline may be divided into any number of stages at which portions of instruction processing are performed, instruction processing generally comprises fetching the instruction, decoding the instruction, executing the instruction, and storing the execution results in the destination identified by the instruction.
A popular instruction set architecture is the x86 instruction set architecture. Due to the widespread acceptance of the x86 instruction set architecture in the computer industry, superscalar processors designed in accordance with this architecture are becoming increasingly common. The x86 instruction set architecture specifies a variable byte-length instruction set in which different instructions may occupy differing numbers of bytes. For example, the 80386 and 80486 processors allow a particular instruction to occupy a number of bytes between 1 and 15. The number of bytes occupied depends upon the particular instruction as well as various addressing mode options for the instruction.
Because instructions are variable-length, locating instruction boundaries is complicated. The length of a first instruction must be determined prior to locating a second instruction subsequent to the first instruction within an instruction stream. However, the ability to locate multiple instructions within an instruction stream during a particular clock cycle is crucial to superscalar processor operation. As operating frequencies increase (i.e. as clock cycles shorten), it becomes increasingly difficult to locate multiple instructions simultaneously.
Various predecode schemes have been proposed in which a predecoder appends information regarding each instruction byte to the instruction byte as the instruction is stored into the cache. As used herein, the term "predecoding" is used to refer to generating instruction decode information prior to storing the corresponding instruction bytes into an instruction cache of a processor. The generated information may be stored with the instruction bytes in the instruction cache. For example, an instruction byte may be indicated to be the beginning or end of an instruction. By scanning the predecode information when the corresponding instruction bytes are fetched, instructions may be located without actually attempting to decode the instruction bytes. The predecode information may be used to decrease the amount of logic needed to locate multiple variable-length instructions simultaneously. Unfortunately, these schemes become insufficient at high clock frequencies as well. A method for locating multiple instructions during a clock cycle at high frequencies is needed.
The problems outlined above are in large part solved by a line predictor as described herein. The line predictor caches alignment information for instructions. In response to each fetch address, the line predictor provides information for the instruction beginning at the fetch address, as well as for one or more additional instructions subsequent to that instruction. The alignment information may be, for example, instruction pointers, each of which directly locates a corresponding instruction within a plurality of instruction bytes fetched in response to the fetch address. The line predictor may include a memory having multiple entries, each entry storing up to a predefined maximum number of instruction pointers and a fetch address corresponding to the instruction identified by a first one of the instruction pointers. Additionally, each entry may include a link to another entry storing instruction pointers to the next instructions within the predicted instruction stream, and a next fetch address corresponding to the first instruction within the next entry. The next fetch address may be provided to the instruction cache to fetch the corresponding instruction bytes.
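The entry structure described above can be illustrated with a minimal software sketch. This is a behavioral model, not the hardware itself; the entry count, the per-entry pointer limit (MAX_POINTERS), and the field names are illustrative assumptions, since the text specifies only "a predefined maximum number of instruction pointers."

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

MAX_POINTERS = 4  # assumed per-entry limit; the text says only "a predefined maximum"

@dataclass
class LineEntry:
    fetch_address: int         # address of the instruction located by the first pointer
    instr_pointers: List[int]  # offsets directly locating each instruction in the fetched bytes
    next_fetch_address: int    # predicted address of the first instruction of the next entry
    next_index: int            # link to the entry holding pointers for the next instructions

    def __post_init__(self):
        assert len(self.instr_pointers) <= MAX_POINTERS

class LinePredictor:
    """Behavioral model of the line predictor memory."""
    def __init__(self, num_entries: int = 8):
        self.entries: List[Optional[LineEntry]] = [None] * num_entries

    def lookup(self, fetch_address: int) -> Optional[Tuple[List[int], int, int]]:
        """Return (instruction pointers, next fetch address, next index) on a hit."""
        for entry in self.entries:
            if entry is not None and entry.fetch_address == fetch_address:
                return entry.instr_pointers, entry.next_fetch_address, entry.next_index
        return None  # miss: a new entry would be created and trained
```

On a hit, the pointers locate multiple instructions in a single cycle without decoding the instruction bytes, while the next fetch address and next index immediately launch the following fetch.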
If the terminating instruction within the entry is a conditional branch instruction, the next fetch address may be the branch target address or the sequential address, depending upon the condition outcome. On the other hand, if the terminating instruction within the entry is an indirect branch instruction, the next fetch address (i.e. the branch target address) may be variable based on the operands of the indirect branch instruction. If the terminating instruction is a return instruction, the next fetch address is the sequential address of the most recent call instruction. Accordingly, the line predictor is trained with respect to the next fetch address (and next index within the line predictor, which provides the link to the next entry). As line predictor entries are created, a set of branch predictors may be accessed to provide an initial next fetch address and index. The initial training is verified by accessing the branch predictors at each fetch of the line predictor entry, and updated as dictated by the state of the branch predictors at each fetch.
For example, conditional branches may be predicted taken or not-taken, and the next fetch address may be set to the branch target address or sequential address accordingly. Additionally, a next alternate fetch address (and index within the line predictor) may be stored for the entry (corresponding to the non-predicted target or sequential path). If the prediction stored in the line predictor entry disagrees with the branch predictor during a particular fetch, the alternate fetch address and index may be used. Furthermore, the line predictor may swap the next fetch and next alternate fetch fields to reflect the more recent prediction.
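The swap of the next fetch and next alternate fetch fields can be sketched as follows. This is a simplified behavioral model under assumed field names; the actual hardware performs the comparison and swap as part of entry verification at each fetch.

```python
from dataclasses import dataclass

@dataclass
class CondBranchEntry:
    next_fetch: int       # currently predicted path (branch target or sequential address)
    next_index: int       # link to the line predictor entry on the predicted path
    alt_fetch: int        # the non-predicted path
    alt_index: int
    predicted_taken: bool # direction currently reflected by next_fetch

def verify_against_branch_predictor(entry: CondBranchEntry, predictor_taken: bool) -> int:
    """Return the fetch address to use. If the stored prediction disagrees with the
    branch predictor, use the alternate path and swap the fields so the entry
    reflects the more recent prediction."""
    if predictor_taken != entry.predicted_taken:
        entry.next_fetch, entry.alt_fetch = entry.alt_fetch, entry.next_fetch
        entry.next_index, entry.alt_index = entry.alt_index, entry.next_index
        entry.predicted_taken = predictor_taken
    return entry.next_fetch
```

Because both paths (and both links) are cached in the entry, a disagreement with the branch predictor redirects fetching without recreating the entry.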
The branch predictors may include an indirect branch target cache and a return stack for predicting indirect branch target addresses and return addresses, respectively. The next fetch address in an entry terminated by an indirect branch instruction may be verified against a predicted address from the indirect branch target cache. Similarly, the next fetch address in an entry terminated by a return instruction may be verified against the top of the return stack. If a mismatch occurs, the predicted address from the corresponding branch predictor is used and the line predictor entry is updated with the newly predicted address. The line predictor may provide a rapid means for providing next fetch addresses (and next indexes), while the branch predictors may, in parallel, provide accurate branch predictions for the branches. The line predictor may be updated to reflect the branch predictor state, thereby tracking the branch predictors.
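The verification step for indirect branches and returns can be modeled as below. The indirect branch target cache is modeled as a dictionary keyed by branch address and the return stack as a list; these representations, and the function name, are illustrative assumptions.

```python
from typing import Dict, List, Optional, Tuple

def verify_next_fetch(entry_kind: str,
                      stored_next_fetch: int,
                      indirect_cache: Dict[int, int],
                      return_stack: List[int],
                      branch_pc: Optional[int] = None) -> Tuple[int, bool]:
    """Check the entry's stored next fetch address against the appropriate branch
    predictor. Returns (address to fetch, mismatch flag); on a mismatch the line
    predictor entry is updated with the newly predicted address."""
    if entry_kind == "indirect":
        predicted = indirect_cache.get(branch_pc, stored_next_fetch)
    elif entry_kind == "return":
        predicted = return_stack[-1] if return_stack else stored_next_fetch
    else:
        return stored_next_fetch, False  # no address verification needed
    return predicted, predicted != stored_next_fetch
```

The line predictor thus supplies next fetch addresses rapidly, while the slower but more accurate branch predictors verify them in parallel and correct the entry when they disagree.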
Broadly speaking, a processor is contemplated, comprising a fetch address generation unit configured to generate a fetch address and a line predictor coupled to the fetch address generation unit. The line predictor includes a first memory comprising a plurality of entries, each entry storing a plurality of instruction pointers and a next entry indication. The line predictor is configured to select a first entry (of the plurality of entries) corresponding to the fetch address. If one of a first plurality of instruction pointers within the first entry identifies a branch instruction, the next entry indication identifies a next fetch address as one of a branch target address or a sequential address of the branch instruction. Additionally, a computer system is contemplated including the processor and an input/output (I/O) device configured to communicate between the computer system and another computer system to which the I/O device is couplable.
Moreover, a method is contemplated. A fetch address is generated. A plurality of instruction pointers and a next entry indication are selected from a line predictor. The plurality of instruction pointers and the next entry indication correspond to the fetch address. A next fetch address is generated responsive to the next entry indication, wherein the next fetch address comprises one of a branch target address or a sequential address if one of the plurality of instruction pointers identifies a branch instruction.
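The contemplated method can be walked end to end in a small sketch. The line predictor is reduced to a mapping from fetch address to (instruction pointers, next fetch address); the addresses and pointer offsets below are invented purely for illustration.

```python
# Hypothetical predicted instruction stream: fetch address -> (pointers, next fetch address)
line_predictor = {
    0x1000: ([0, 3, 5], 0x2000),  # entry terminated by a branch predicted taken to 0x2000
    0x2000: ([0, 2], 0x2008),     # entry falling through to the sequential address
}

def fetch_step(fetch_address):
    """Select the instruction pointers and next-entry indication for one fetch address."""
    pointers, next_fetch = line_predictor[fetch_address]
    return pointers, next_fetch

def walk(start, steps):
    """Follow the predicted stream, collecting the instruction addresses located each cycle."""
    addr, stream = start, []
    for _ in range(steps):
        pointers, next_fetch = fetch_step(addr)
        stream.extend(addr + p for p in pointers)
        addr = next_fetch
    return stream
```

Each iteration corresponds to one fetch: the pointers locate multiple instructions within the fetched bytes, and the next-entry indication generates the following fetch address.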