In personal computer systems (PCs), a CPU and a memory (such as a DRAM) are interconnected through a bus. Each device acts as a master device (bus master) to access the memory in which data is stored. While such memories (system memories) configured as DRAMs have a large storage capacity, their access performance is slower. To achieve faster access to frequently used data, a CPU uses a cache memory (hereinafter "cache") implemented with a memory such as an SRAM. Although a cache has a smaller storage capacity than a DRAM system memory, it can provide faster access than the DRAM system memory.
In a system having a cache, coherency (data consistency) between the cache and the main memory must be maintained. One algorithm for maintaining data coherency is the snooping algorithm. FIG. 1 is a diagram illustrating a conventional snoop operation. In FIG. 1, a CPU bus 1 and a system bus 2 are interconnected through a bus bridge 3. CPU #0 and CPU #1 are coupled onto CPU bus 1, and each of the two CPUs has a cache. Coupled onto system bus 2 are a device #2, a memory controller, and a memory.
According to the snooping algorithm, CPU #0, which has a cache, watches (snoops 5) the address of a data access 4 from another device #2 (master device) (FIG. 1(a)). CPU #0 issues a retry request 6 only if the access address matches the address of data in the cache of CPU #0 and the state of that data has been changed (updated) in accordance with a protocol such as the standard MESI protocol (FIG. 1(b)). In response to the retry request 6, the in-progress access from the master device #2 is aborted (FIG. 1(b)). The cache line containing the matching address, which consists of multiple data items at contiguous addresses, is then written back from the cache to the memory (FIGS. 1(c) and 1(d)). Finally, master device #2 accesses the memory again to transfer the data, thereby maintaining data coherency (FIGS. 1(e) and 1(f)).
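The snoop-and-retry sequence above can be sketched as a toy model. Python is used purely for illustration; the class names, the dictionary-based cache, and the reduced M/S/I state set are assumptions for the sketch, not the structure of any real coherency controller.

```python
# Toy model of the conventional snoop operation of FIG. 1(a)-(f).
# All names and data structures are illustrative assumptions.

MODIFIED, SHARED, INVALID = "M", "S", "I"  # subset of MESI states

class SnoopingCache:
    def __init__(self):
        self.lines = {}  # address -> (state, data)

    def snoop(self, address):
        """Watch the access address; return True (retry request)
        only on a hit whose data is in the Modified state."""
        state, _ = self.lines.get(address, (INVALID, None))
        return state == MODIFIED

    def write_back(self, address, memory):
        """Write the modified line back to memory, leaving it Shared."""
        state, data = self.lines[address]
        memory[address] = data
        self.lines[address] = (SHARED, data)

def master_access(address, memory, snoopers):
    # (a)-(b): each cache snoops the access address; a snoop hit on
    # modified data issues a retry request, aborting the access.
    for cache in snoopers:
        if cache.snoop(address):
            # (c)-(d): the hitting cache writes the line back first.
            cache.write_back(address, memory)
    # (e)-(f): the master accesses the now-coherent memory again.
    return memory[address]

memory = {0x100: "stale"}
cpu0 = SnoopingCache()
cpu0.lines[0x100] = (MODIFIED, "updated")  # CPU #0 changed the data
result = master_access(0x100, memory, [cpu0])
```

In this sketch the master's retried access observes the written-back value ("updated"), which is the coherency guarantee the sequence exists to provide; the cost is the extra abort-and-retry round trip discussed next.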
As can be seen from the operation shown in FIG. 1, if a retry request is issued from a watching (snooping) device, the device that is transferring data must abort the access and then make the access again. The additional operational delay caused by a snoop hit on the write access decreases the bus utilization rate, increases the latency seen by the device, and degrades the performance of the memory system as a whole.
A conventional technique for increasing the memory access rate in a multiprocessor system using the snooping approach is disclosed in, for example, Japanese Published Unexamined Patent Application No. 06-222993, which is incorporated herein by reference. However, that published application does not disclose a technique for reducing the operational delay or alleviating the decrease in bus utilization rate caused by an access retry on a snoop hit.