The present invention generally relates to a block access system using a cache memory, and more particularly to a block access system in which a cache memory, coupled via bus lines between a central processing unit (or an operation unit) and a main or external memory unit, holds data from the main memory so that the central processing unit can access it quickly. The present invention further relates to a microprocessor in which a cache memory, an operation unit, and a control circuit for controlling a cache-in operation are built in.
Block access systems using cache memories are widely incorporated in computer systems. In general, a cache memory is either externally coupled to a central processing unit (hereafter simply referred to as CPU) or built into the CPU; microprocessors including cache memories are currently available. The cache memory holds information from a main memory unit or an external memory unit so that the CPU can access it quickly.
In general, when the CPU does not find necessary data, such as numerical values and instructions, in the cache memory (at the time of a miss hit), the CPU reads the necessary data into the cache memory from the memory unit. At this time, the data included in a memory region (a block) of a predetermined size is transferred from the memory unit to the cache memory in a predetermined sequence. In other words, data is transferred per block. Such an operation is called a cache-in operation. Generally, mutually related data are stored in the memory unit at successive addresses. Therefore, at the time of the cache-in operation, one data block is transferred to the cache memory within a single bus cycle in response to a single address supplied to the memory unit by the CPU. This is called a block access. Generally, one block consists of a plurality of words, so that at the time of the block access these words are successively transferred within one bus cycle in response to one address from the CPU.
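The cache-in operation described above can be sketched in software as follows. This is only an illustrative model, not the claimed circuit; the names `BLOCK_WORDS`, `Cache`, and `main_memory`, and the block size of four words, are assumptions for the example.

```python
# Hypothetical sketch: on a miss hit, the whole block containing the
# requested word is filled from the memory unit; the block size and all
# names here are illustrative assumptions, not taken from the invention.
BLOCK_WORDS = 4  # one block = a fixed number of consecutive words

# Toy main memory unit: each word address holds a distinct value.
main_memory = {addr: addr * 10 for addr in range(64)}

class Cache:
    def __init__(self):
        self.lines = {}  # block base address -> list of words in that block

    def read(self, addr):
        base = addr - (addr % BLOCK_WORDS)  # align address to its block boundary
        if base not in self.lines:
            # Miss hit: transfer the entire block from the memory unit
            # in response to the single (base) address.
            self.lines[base] = [main_memory[base + i] for i in range(BLOCK_WORDS)]
        # Hit (or just-filled block): the word is supplied from the cache.
        return self.lines[base][addr - base]
```

For example, a read of address 5 misses and fills the block covering addresses 4 through 7, so a subsequent read of address 6 is served from the cache without another memory access.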
In a conventional block access system, the timing for transferring each word of a block after the CPU requests the block access is predetermined for each computer system. For this reason, the block access is always carried out in accordance with this predetermined timing.
The main memory unit is generally built from memories of different access times, such as dynamic random access memories (hereafter simply referred to as D-RAMs) and static random access memories (hereafter simply referred to as S-RAMs). As is well known, the access times of S-RAMs are shorter than those of D-RAMs. When a computer system is built from memories of different access times, such as D-RAMs and S-RAMs, the timing for the block access must be selected so as to conform to the memory having the longest access time. For this reason, the conventional block access cannot be carried out at high speed, and system performance is degraded.
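The cost of fixing the word-transfer timing to the slowest memory can be illustrated with a small calculation. The specific access times (25 ns for S-RAM, 70 ns for D-RAM) and the four-word block are assumed figures chosen only for the example.

```python
# Illustrative only: assumed access times, not figures from the invention.
access_time_ns = {"S-RAM": 25, "D-RAM": 70}
BLOCK_WORDS = 4

# Conventional system: one fixed word-transfer timing for all memories,
# so the interval must conform to the memory with the longest access time.
word_interval_ns = max(access_time_ns.values())      # 70 ns per word
fixed_block_ns = BLOCK_WORDS * word_interval_ns      # every block pays the D-RAM rate

# For comparison: a block held entirely in S-RAM could in principle
# be transferred at the S-RAM rate.
sram_block_ns = BLOCK_WORDS * access_time_ns["S-RAM"]
```

Under these assumed figures, every four-word block access takes 280 ns even when the block resides in S-RAM, which by itself could supply the block in 100 ns; the fixed timing forgoes that speed advantage.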
Further, the conventional block access has the following disadvantage. Generally, in order to build a computer system using the block access, the memories or memory regions in the memory unit that are subject to the block access must be designated at the time the computer system is built, and the CPU can request the block access only for those designated memories or memory regions. For this reason, the flexibility of the system design is limited.