A data processor such as a CPU (Central Processing Unit) is equipped, in addition to a memory storing a program, with an instruction cache temporarily storing instructions of the program in an attempt to improve the performance of processing. On the data processor, however, a penalty is imposed when a miss occurs, that is, when an instruction to be executed is not included in the instruction cache. This penalty is not negligible if the data processor is to exhibit improved performance of processing. Accordingly, a data processor has been proposed that is configured to access both the memory and the instruction cache simultaneously and thereby avoid the penalty.
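The effect of accessing the memory and the instruction cache in parallel can be illustrated with a minimal latency model. The cycle counts and function names below are assumptions chosen for illustration, not figures from any cited document.

```python
# Hypothetical latency model: serial cache-then-memory access versus the
# parallel scheme that starts both accesses at once. Cycle counts are
# illustrative assumptions only.

CACHE_LATENCY = 1    # cycles for an instruction-cache hit (assumed)
MEMORY_LATENCY = 10  # cycles for a memory read (assumed)

def serial_fetch(hit: bool) -> int:
    """Check the cache first; on a miss, the memory latency is paid on top
    of the wasted cache lookup -- this extra time is the miss penalty."""
    return CACHE_LATENCY if hit else CACHE_LATENCY + MEMORY_LATENCY

def parallel_fetch(hit: bool) -> int:
    """Access cache and memory together; a miss costs only the memory
    latency, so the serial scheme's miss penalty disappears."""
    return CACHE_LATENCY if hit else MEMORY_LATENCY

print(serial_fetch(hit=False))    # 11 cycles: lookup + memory read
print(parallel_fetch(hit=False))  # 10 cycles: memory read only
```

On a hit, both schemes complete in the cache latency; the parallel scheme pays for its concurrency only in bus traffic and power, which is the trade-off such processors accept.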
A data processor configured to include an instruction cache is also disclosed in PTD 1 (Japanese Patent Laying-Open No. 2008-052518) and PTD 2 (Japanese Patent Laying-Open No. 2001-142698). A CPU system disclosed in PTD 1 operates on the condition that the operating speed of the CPU is equal to or less than the operating speed during a burst read of an SDRAM. When the CPU is to process a branch instruction, a comparator determines whether the instruction cache memory stores the instruction at the branch target. When the instruction cache memory stores that instruction, the instruction is read from the instruction cache memory.
A CPU disclosed in PTD 2 employs a memory access scheme in which the CPU accesses a main memory simultaneously with an instruction memory, so that instruction codes from the instruction memory and from the main memory are fetched seamlessly.
In such a data processor, when an instruction queue, in which instructions read from a memory are stored in advance, has free space, a fetch process that reads an instruction from the memory into the instruction queue is performed regardless of the instruction being executed. This fetch process is disclosed in PTD 3 (Japanese Patent Laying-Open No. 2006-048258), PTD 4 (Japanese Patent Laying-Open No. 06-161750), PTD 5 (Japanese Patent Laying-Open No. 2000-357090), and PTD 6 (Japanese Patent Laying-Open No. 05-027972).
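The queue-driven fetch process described above can be sketched as follows. The queue depth, memory contents, and function name are illustrative assumptions; the point is only that fetching continues whenever the queue has free space, independently of the execution unit.

```python
from collections import deque

QUEUE_DEPTH = 4  # illustrative instruction-queue capacity (assumed)

def prefetch(queue: deque, memory: list, next_addr: int) -> int:
    """While the instruction queue has free space, keep reading the next
    sequential instruction from memory into it, regardless of what the
    execution unit is doing. Returns the updated fetch address."""
    while len(queue) < QUEUE_DEPTH and next_addr < len(memory):
        queue.append(memory[next_addr])
        next_addr += 1
    return next_addr

memory = ["add", "sub", "load", "store", "mul", "jmp"]
queue: deque = deque()
pc = prefetch(queue, memory, 0)
print(list(queue), pc)  # ['add', 'sub', 'load', 'store'] 4
```

As the execution unit consumes entries from the front of the queue, space frees up and the same routine refills it from the next sequential address.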
A data processor disclosed in PTD 3 includes an instruction fetch control unit, an instruction buffer retaining instructions fetched by the fetch control unit, and an execution unit executing the instructions retained in the instruction buffer in a predetermined order and in a pipelined manner. The fetch control unit uses the instruction address of a branch instruction to acquire predictive information indicating the predicted direction of a conditional branch as well as the accuracy of the prediction. It is capable of fetching instructions both on the predicted path and on the non-predicted path of a conditional branch instruction, and it selectively stops fetching instructions on the non-predicted path in accordance with the predictive information.
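A policy of this kind can be sketched briefly. The confidence threshold, path labels, and function name below are assumptions for illustration, not details taken from PTD 3; the sketch only shows the idea of fetching the non-predicted path when the prediction is weak and suppressing it when the prediction is confident.

```python
# Hypothetical fetch-selection policy: always fetch the predicted path;
# additionally fetch the non-predicted path only while the branch
# prediction is not confident. Threshold and names are assumptions.

CONFIDENCE_THRESHOLD = 0.9  # illustrative cut-off for "accurate" prediction

def paths_to_fetch(predicted_taken: bool, confidence: float) -> list:
    """Return the branch paths the fetch unit should pursue."""
    paths = ["taken" if predicted_taken else "not_taken"]  # predicted path
    if confidence < CONFIDENCE_THRESHOLD:
        # Low-confidence prediction: hedge by also fetching the other path.
        paths.append("not_taken" if predicted_taken else "taken")
    return paths

print(paths_to_fetch(True, 0.95))  # ['taken']
print(paths_to_fetch(True, 0.60))  # ['taken', 'not_taken']
```

Stopping the non-predicted fetch when confidence is high frees memory bandwidth for the path that is almost certainly needed.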
A CPU disclosed in PTD 4 is equipped with an early-stage branch condition check circuit that detects the state of a tag at the prefetch timing of a branch instruction and makes a branch determination at an early stage, taking into account the contents of a zero flag in a buffer or in a condition code.
A CPU disclosed in PTD 5 includes therein a branch prediction mechanism for the purpose of shortening the time taken to access a main memory when a cache miss occurs for a conditional branch instruction.
A CPU disclosed in PTD 6 has an instruction detector between an instruction queue and a memory. When a branch instruction is included in the instructions read into the instruction queue, the CPU temporarily stops reading instructions from the memory until the branch target address of the branch instruction is confirmed.
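The detector-driven stall can be sketched as follows. The instruction mnemonics, the branch-detection test, and the function name are assumptions for illustration; the sketch captures only the behavior of halting sequential fetch once a branch enters the queue, so that no wrong-path instructions are read before the target address is known.

```python
# Illustrative sketch of a fetch stall triggered by a branch detector
# sitting between the memory and the instruction queue. All names and the
# "br" prefix convention are assumptions.

def fetch_until_branch(memory: list, start: int, queue: list) -> int:
    """Fill the queue sequentially from `start`; stop after the first
    branch instruction so fetching pauses until its target is confirmed.
    Returns the address at which fetching stopped."""
    addr = start
    while addr < len(memory):
        insn = memory[addr]
        queue.append(insn)
        addr += 1
        if insn.startswith("br"):  # detector spotted a branch instruction
            break                  # stall until the target address is known
    return addr

queue: list = []
nxt = fetch_until_branch(["add", "br label", "sub", "mul"], 0, queue)
print(queue, nxt)  # ['add', 'br label'] 2
```

Once the branch target address is resolved, fetching would resume from that target rather than from the stalled sequential address.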