This invention relates to a cache memory circuit for use between a main memory and a request source, such as a central processing unit.
A cache memory circuit of the type described is provided with a cache memory and is operable to fetch a data block from the main memory into the cache memory in response to a current one of read requests issued by the request source when the cache memory does not hold the data block in question. The data block is divisible into a plurality of data units which are numbered from a leading data unit to a trailing one and which are successively transferred from the main memory to the cache memory. Such successive transfer of a data block will be called block transfer hereinafter.
On carrying out the block transfer, a conventional cache memory monitors reception of the data units and becomes accessible only at the time instant at which the trailing data unit is stored in the cache memory. Under the circumstances, a following one of the read requests must wait for completion of the block transfer of a data block for the current read request when the following read request accesses the data block in question. Therefore, the following read request cannot be processed in the cache memory circuit before storage of the trailing data unit in the cache memory is completed, despite the fact that each data unit preceding the trailing data unit is already present in the cache memory before the trailing data unit is stored.
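The conventional behavior described above may be sketched as follows. This is an illustrative model only, not part of the invention or of any cited patent; the class name `ConventionalCacheLine` and its methods are hypothetical.

```python
# Hypothetical sketch of a conventional cache line that becomes accessible
# only after the trailing data unit of a block transfer has been stored,
# so a following read request must wait for the whole block.

class ConventionalCacheLine:
    def __init__(self, units_per_block):
        self.units_per_block = units_per_block
        self.units = [None] * units_per_block
        self.received = 0  # number of data units stored so far

    def store_unit(self, index, data):
        """Store one data unit arriving from the main memory."""
        self.units[index] = data
        self.received += 1

    def is_accessible(self):
        # The line is usable only once the trailing data unit has arrived.
        return self.received == self.units_per_block

    def read_unit(self, index):
        if not self.is_accessible():
            raise RuntimeError("read request must wait: block transfer incomplete")
        return self.units[index]


line = ConventionalCacheLine(units_per_block=4)
line.store_unit(0, "unit0")  # leading data unit arrives ...
line.store_unit(1, "unit1")
# ... yet even the already-stored leading unit cannot be read:
try:
    line.read_unit(0)
except RuntimeError:
    print("waiting")          # prints "waiting"
line.store_unit(2, "unit2")
line.store_unit(3, "unit3")  # trailing data unit stored
print(line.read_unit(0))     # prints "unit0"
```

Note that the leading data unit is physically present in the cache after the first cycle, yet the conventional scheme refuses the following read request until the trailing unit arrives.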
As a result, when a read request requires block transfer, the access time for the read request is determined by the time for the block transfer in addition to the access time of the main memory. Therefore, the access time for such a read request becomes undesirably long.
In U.S. Pat. No. 4,467,419, issued to K. Wakai, a data processing unit is proposed to improve processing performance on block transfer from a main memory to a cache memory. To this end, the block transfer is carried out by the use of a data unit having an increased data width, which may be twice the usual data width or more. With this structure, a following one of the read requests may be received by the cache memory during block transfer. However, the following read request cannot access a data block which is in the course of the block transfer. In this event, the following read request must be delayed until completion of the block transfer.
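The effect of widening the data unit can be illustrated with a simple cycle count. This sketch is an assumption-laden illustration, not a reproduction of the cited patent: the function `transfer_cycles` and the particular sizes are hypothetical, and the point is only that a wider unit shortens the transfer while the block nevertheless remains inaccessible until the transfer completes.

```python
# Hypothetical illustration: doubling the data unit width halves the number
# of transfer cycles needed for one block, assuming one data unit is moved
# per cycle. The block still cannot be accessed until all cycles finish.

def transfer_cycles(block_size, unit_width):
    """Cycles needed to move one block, one data unit per cycle."""
    return block_size // unit_width

print(transfer_cycles(block_size=32, unit_width=4))  # prints 8 (usual width)
print(transfer_cycles(block_size=32, unit_width=8))  # prints 4 (doubled width)
```

Thus the wider data unit shortens the waiting period for a following read request but does not eliminate it.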