1. Field of the Invention
The present invention relates to a cache memory system.
2. Description of the Related Art
As microprocessor speeds have increased, hierarchically structured cache memory has become a popular means of speeding up access to memory. When data to be accessed is not in the cache memory of a cache memory system, the access results in a cache miss and the data must be transferred from main memory to cache memory. Therefore, when desired data is not in cache memory, the processor must suspend its processing until the transfer of data from main memory to cache memory is finished, decreasing the processing capacity.
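The miss-and-transfer behavior described above can be sketched as follows. This is a minimal illustrative model only; the line count, block size, field names, and stall-cycle figure are assumptions, not taken from the patent.

```c
#include <assert.h>
#include <string.h>

/* Minimal sketch of a direct-mapped cache.  Sizes and the stall-cycle
   figure below are illustrative assumptions, not from the patent. */
#define NUM_LINES   4
#define BLOCK_SIZE  16
#define MISS_STALL  20   /* assumed cycles lost waiting on main memory */

struct line { int valid; unsigned tag; unsigned char data[BLOCK_SIZE]; };

static struct line   cache[NUM_LINES];
static unsigned char main_mem[1024];
static int           stall_cycles;   /* time the processor is suspended */

/* Returns a pointer to the requested byte.  On a cache miss, the whole
   block is transferred from main memory and the processor stalls. */
unsigned char *cache_read(unsigned addr)
{
    unsigned block = addr / BLOCK_SIZE;
    unsigned index = block % NUM_LINES;
    unsigned tag   = block / NUM_LINES;
    struct line *l = &cache[index];

    if (!l->valid || l->tag != tag) {            /* cache miss */
        memcpy(l->data, &main_mem[block * BLOCK_SIZE], BLOCK_SIZE);
        l->valid = 1;
        l->tag   = tag;
        stall_cycles += MISS_STALL;              /* processing suspended */
    }
    return &l->data[addr % BLOCK_SIZE];          /* cache hit path */
}
```

Note that a second access to the same block takes the hit path and adds no stall cycles, which is the behavior the prefetching schemes below try to guarantee for jump targets.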
To increase the cache memory hit ratio, various methods have been proposed. For example, Japanese Patent Kokai Publication JP-A No. Hei 4-190438 discloses a method in which the program execution flow is read ahead to bring data, which will be used at branch addresses, into cache memory in advance.
FIG. 3 is a block diagram showing the configuration of a conventional cache memory system. As shown in FIG. 3, a central processing unit CPU 301 is connected to cache memory CM 303 and to a cache controller CMC 304 via an address bus 311 and a data bus 321. A sub-processing unit SPU 302 is connected to the cache controller CMC 304 via an address bus 312 and the data bus 321, and to the cache memory CM 303 via the data bus 321. The sub-processing unit SPU 302 monitors instructions sent to the central processing unit CPU 301 via the data bus 321. Upon detecting a cache update instruction that the compiler has automatically inserted before a jump instruction, the sub-processing unit SPU 302 tells the cache controller CMC 304 to update cache memory. The cache controller CMC 304 does not update cache memory itself; instead, it passes the update address information to a DMA controller 305 and causes it to start transferring data from main memory 306 to the location in cache memory CM 303 indicated by that address information. The cache update instruction is meaningless to the central processing unit CPU 301 and is simply ignored by it. After that, when control is passed to the jump instruction, no cache miss occurs because the data has already been transferred from main memory 306 to cache memory CM 303.
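The monitoring scheme of FIG. 3 can be sketched in software form as follows. The opcode values, the DMA and cache models, and the function names are illustrative assumptions; the patent describes hardware units (SPU, CMC, DMAC), not code.

```c
#include <assert.h>

/* Sketch of the FIG. 3 scheme: a monitor in the role of the SPU
   watches the instruction stream; on a compiler-inserted
   OP_CACHE_UPDATE marker it starts a DMA transfer into cache before
   the jump is reached.  All names here are illustrative assumptions. */
enum opcode { OP_NOP, OP_CACHE_UPDATE, OP_JUMP };

struct insn { enum opcode op; unsigned operand; };

static int cache_filled[8];   /* 1 = block already present in cache */
static int dma_starts;

/* CMC passes the address to the DMA controller, which fills the cache. */
static void dma_fill(unsigned block)
{
    cache_filled[block] = 1;
    dma_starts++;
}

/* SPU-style monitor: the CPU ignores OP_CACHE_UPDATE; the monitor
   acts on it so the later jump target is already cached. */
void run(const struct insn *prog, int n)
{
    for (int i = 0; i < n; i++) {
        if (prog[i].op == OP_CACHE_UPDATE)
            dma_fill(prog[i].operand);              /* prefetch target */
        else if (prog[i].op == OP_JUMP)
            assert(cache_filled[prog[i].operand]);  /* no miss here */
    }
}
```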
Another proposed method is that the sub-processing unit SPU 302 fetches an instruction that is several instructions ahead of the current instruction, so that a cache miss can be detected in advance and the cache controller CMC 304 can be made to update cache memory before the miss occurs.
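This lookahead scheme can be sketched as follows. The lookahead distance, array sizes, and the assumption that every line not yet fetched would miss are all illustrative stand-ins, not details from the patent.

```c
/* Sketch of the lookahead scheme: the SPU reads the instruction
   LOOKAHEAD slots ahead of the CPU (which is why the memory needs a
   second port) and has the CMC fill that line early.  LOOKAHEAD and
   the cache model are illustrative assumptions. */
#define LOOKAHEAD 4
#define N 16

static int cached[N + LOOKAHEAD];   /* 1 = line already in cache */
static int cpu_misses;

void step(int pc)
{
    /* SPU: probe the address LOOKAHEAD instructions ahead and update
       the cache in advance if it would miss. */
    int ahead = pc + LOOKAHEAD;
    if (!cached[ahead])
        cached[ahead] = 1;

    /* CPU: by the time execution reaches an address, the SPU has
       already had it fetched, so no miss is seen. */
    if (!cached[pc])
        cpu_misses++;
}
```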
A general mechanism of cache memory is described, for example, in "Computer Organization and Design" (Nikkei BP).
However, the prior art described above has the following problems.
The first problem is that the system according to the prior art requires hardware specifically designed to monitor programs, resulting in a large circuit.
The second problem is that reading an instruction that is several instructions ahead of the current instruction requires the memory to have two or more ports, and memory with two or more ports is normally large.
The third problem is that, because the update instruction is inserted automatically by a compiler at a location that is several instructions ahead of the current instruction, the time at which cache memory updating starts cannot be set freely. Therefore, even when it is found that updating cache memory takes longer because of an increase in the cache memory block size or in the main memory access time, cache memory updating cannot be started earlier. As a result, cache memory updating sometimes fails to complete within the predetermined period of time.
The fourth problem is that the method of automatically inserting a cache update instruction, through the use of a compiler, at a location several instructions ahead of the current instruction requires the compiler to have that function built in, increasing the development cost of development tools such as a compiler.