Ever-higher requirements on computing speed and computation scale have led to multiprocessor systems. In a multiprocessor system, multiple processors communicate with each other by using an interconnection network. The interconnection network usually includes multiple switches, so that the interconnection network can connect both to processors responsible for computation and to memories responsible for storage. When a processor needs to access a memory, a request is forwarded to the memory by using the interconnection network. However, an increasing quantity of processors and memories results in an increasing scale of the interconnection network, which in turn increases the access delay when a processor accesses a remote memory. Consequently, system performance deteriorates.
A method for reducing the access delay when a processor accesses a remote memory (that is, a memory connected to a port of a switch) is provided in the prior art. In the method, all switches in the interconnection network have a cache (Cache) function, so that some memory data can be cached in the switches. When data that a processor needs to access is present in a switch, the data may be returned directly from the cache in the switch, so that the remote memory does not need to be accessed and the access delay is reduced.
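The hit-or-forward behavior described above can be sketched as follows. This is a minimal illustrative model, not the implementation described in the application; the class names, the line-addressed memory, and the remote-access counter are all assumptions made for illustration.

```python
# Illustrative sketch: a switch that caches memory lines so that cache hits
# are served locally and only cache misses are forwarded to a remote memory.
# All names and structure are hypothetical, chosen only to show the idea.

class Memory:
    """A remote memory addressed by line address."""
    def __init__(self, data):
        self.data = dict(data)
        self.remote_accesses = 0  # each remote access models an access delay

    def read(self, addr):
        self.remote_accesses += 1
        return self.data[addr]


class CachingSwitch:
    """A switch with a cache function: hit -> return locally, miss -> forward."""
    def __init__(self, memory):
        self.memory = memory
        self.cache = {}

    def read(self, addr):
        if addr in self.cache:            # hit: data returned by the switch itself
            return self.cache[addr]
        value = self.memory.read(addr)    # miss: forward request to remote memory
        self.cache[addr] = value          # keep a copy for later accesses
        return value


mem = Memory({0x10: "A", 0x20: "B"})
sw = CachingSwitch(mem)
sw.read(0x10)                  # miss: one remote access
sw.read(0x10)                  # hit: no additional remote access
print(mem.remote_accesses)     # → 1
```

The point of the sketch is only that repeated accesses to the same data incur the remote-memory delay once; subsequent accesses are absorbed by the switch.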
In the process of implementing the present application, the inventor has found that the prior art has at least the following problem:
All switches have caches, and the data cached in each cache may include shared data, that is, data used by multiple processors. When shared data in the cache of one switch is modified while a copy of that shared data exists in the cache of another switch, and the copy in the other switch is not modified in a timely manner, a processor that accesses the stale copy obtains incorrect data. Therefore, to avoid such processor errors, data consistency among the caches needs to be ensured. However, maintaining cache coherence is usually extremely complex.
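The stale-copy problem, and the invalidation step a coherence mechanism must add, can be illustrated with a hypothetical two-switch scenario. Everything here (the dictionary caches, the `coherent_write` helper, the invalidate-on-write policy) is an illustrative assumption, not the application's mechanism; invalidation-on-write is only one common coherence strategy.

```python
# Hypothetical two-switch scenario showing why shared copies must be kept
# coherent. The classes and the coherent_write helper are illustrative only.

class Switch:
    def __init__(self):
        self.cache = {}


sw1, sw2 = Switch(), Switch()

# Both switches hold a copy of shared data at address 0x10.
sw1.cache[0x10] = "old"
sw2.cache[0x10] = "old"

# A processor behind sw1 modifies the shared data in sw1's cache...
sw1.cache[0x10] = "new"

# ...but without coherence maintenance, sw2's copy is not updated, so a
# processor behind sw2 still reads the stale value "old": an error occurs.
stale = sw2.cache[0x10]

# One common remedy (invalidate-on-write): a write removes all other copies,
# forcing other switches to refetch the current data from memory.
def coherent_write(addr, value, writer, others):
    writer.cache[addr] = value
    for sw in others:
        sw.cache.pop(addr, None)  # invalidate any stale copy

coherent_write(0x10, "newer", sw1, [sw2])
print(stale)              # → old   (the error without coherence)
print(0x10 in sw2.cache)  # → False (stale copy invalidated)
```

Even this toy version hints at the complexity: every write to shared data must locate and act on every other copy, which in a large interconnection network requires tracking where copies reside.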