A pipelined instruction execution unit includes circuitry for implementing a hierarchy of execution elements, each of which is responsive, during a given instruction cycle, to a different instruction. Instructions make their way through the instruction execution unit in sequence such that the various components of the instruction execution unit are utilized in an efficient manner. That is, the various components are kept maximally occupied with instruction decoding, effective address calculations, and other required functions. By example, during a time that a first stage of the pipeline is decoding instruction A, a second stage of the pipeline is calculating an effective address for a preceding instruction B, and a third stage is performing a virtual address translation and operand prefetch for a preceding instruction C.
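The overlapped occupancy described above can be sketched in simulation. The following is a minimal sketch, assuming an illustrative three-stage pipeline (the stage names and the instruction stream C, B, A are assumptions for the example, not part of the disclosure):

```python
# Sketch of a three-stage pipelined instruction unit: while one stage
# decodes instruction A, the next calculates an effective address for the
# preceding instruction B, and a third performs virtual address
# translation and operand prefetch for the preceding instruction C.

STAGES = ["decode", "address-generate", "translate/prefetch"]

def pipeline_trace(instructions, stages=STAGES):
    """Return, for each cycle, which instruction occupies each stage."""
    trace = []
    n_cycles = len(instructions) + len(stages) - 1
    for cycle in range(n_cycles):
        occupancy = {}
        for s, stage in enumerate(stages):
            i = cycle - s  # instruction i entered the pipe i cycles ago
            if 0 <= i < len(instructions):
                occupancy[stage] = instructions[i]
        trace.append(occupancy)
    return trace

# Instructions enter in program order: C precedes B precedes A.
trace = pipeline_trace(["C", "B", "A"])
```

In cycle 2 of this trace all three stages are busy at once, which is the efficient utilization the passage describes: A is being decoded while the effective address for B is computed and operands for C are prefetched.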
One problem that arises during the use of such a pipelined instruction execution unit results from the occurrence of conditional Branch instructions. That is, a conditional Branch instruction that advances through the pipeline may or may not alter a currently pipelined sequence of instruction execution.
A number of techniques for accommodating conditional Branch instructions are known in the art. These techniques include: suspending the flow of instruction execution until the conditional Branch is resolved; causing the flow to provisionally take an assumed Branch path; provisionally exploring both possible Branch paths; and predicting an outcome of a conditional Branch based on available information.
One approach for realizing this latter technique is described in U.S. Pat. No. 4,430,706, issued Feb. 7, 1984, entitled "Branch Prediction Apparatus and Method for a Data Processing System", by D. B. Sand. This patent discloses the use of a Branch prediction memory and the use of a hash-coded version of an instruction address to access the prediction memory when a conditional Branch instruction is encountered.
The following three IBM Technical Disclosure Bulletins all refer to the use of Branch History Tables (BHTs): "Subroutine Routine Address Stack", Vol. 24, No. 7A, (12/81); "Highly Accurate Subroutine Stack Prediction Mechanism", Vol. 28, No. 10, (3/86); and "Subroutine Call/Return Stack", Vol. 30, No. 11, (4/88).
As described in the first above-referenced article, a BHT is used to predict Branches before they are decoded, and maintains a record of the last (n) taken Branches by storing the Target Addresses of those Branches in an associative or a set associative table. The table is accessed by an instruction address when an instruction is prefetched. A BHT "hit", that is, the presence of a table entry for that instruction address, indicates that the last time the instruction was executed it was a taken Branch. When the BHT hit occurs, the BHT outputs the corresponding Target Address, which is used to redirect instruction prefetching. The underlying assumption in the use of a BHT is that there is a high probability that a Branch that was taken previously will be taken again, and that the Target Address will not change between successive executions of the Branch instruction.
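The lookup-and-record behavior described above can be sketched as follows. This is a minimal sketch, assuming a small fully associative table with oldest-entry eviction; the table size, the eviction policy, and the example addresses are illustrative assumptions, not details taken from the cited articles:

```python
# Sketch of a Branch History Table (BHT): a table mapping the addresses
# of the last (n) taken Branch instructions to their Target Addresses.
# A hit at prefetch time predicts "taken" and supplies the redirect target.
from collections import OrderedDict

class BranchHistoryTable:
    def __init__(self, n_entries=4):
        self.n = n_entries
        self.table = OrderedDict()  # Branch address -> Target Address

    def record_taken(self, branch_addr, target_addr):
        """Called when a Branch resolves as taken: remember its target."""
        self.table.pop(branch_addr, None)   # refresh an existing entry
        self.table[branch_addr] = target_addr
        if len(self.table) > self.n:
            self.table.popitem(last=False)  # evict the oldest entry

    def lookup(self, fetch_addr):
        """On instruction prefetch: a hit redirects prefetching."""
        return self.table.get(fetch_addr)   # None means no prediction

bht = BranchHistoryTable()
bht.record_taken(0x100, 0x240)   # Branch at 0x100 was taken to 0x240
bht.lookup(0x100)                # hit: redirect prefetch to 0x240
bht.lookup(0x104)                # miss: continue sequential prefetch
```

A hit simply replays the previous outcome, which is exactly the assumption stated above: a previously taken Branch is likely to be taken again, to the same Target Address.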
An object of this invention is to provide a pipelined instruction unit and an associated effective address generation unit that employs a BHT, and that also includes novel enhancements to further optimize the efficiency of the instruction unit.