In shared-memory multi-core systems, communication between processor cores (hereinafter referred to as “cores”) is performed through a shared memory. First, a core A sends a write request packet to the shared memory through an on-chip router (hereinafter referred to as “router”). The data included in the write request packet is written to a predetermined address in the shared memory. Thereafter, a core B requiring the data at the predetermined address sends a read request packet to the shared memory, and thereby acquires the data written by the core A.
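The write-then-read sequence described above can be sketched as follows. This is a minimal illustrative model, not the actual hardware: the cores are modeled as threads, the shared memory as a dictionary, and the names (`SHARED_ADDR`, `core_a`, `core_b`) are assumptions introduced for the sketch.

```python
import threading

shared_memory = {}            # models the shared memory
SHARED_ADDR = 0x1000          # hypothetical "predetermined address"
written = threading.Event()   # models ordering: B reads only after A's write completes

def core_a():
    # Core A's write request: the data is written at the predetermined address.
    shared_memory[SHARED_ADDR] = "payload"
    written.set()

def core_b(result):
    # Core B's read request: it acquires the data written by core A.
    written.wait()
    result.append(shared_memory[SHARED_ADDR])

result = []
ta = threading.Thread(target=core_a)
tb = threading.Thread(target=core_b, args=(result,))
tb.start(); ta.start()
ta.join(); tb.join()
print(result[0])  # -> payload
```

Note that the transfer takes two round trips through the memory (one write, one read), which is the source of the latency discussed next.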
Accordingly, in the inter-core communication of the shared-memory multi-core system, data is sent and received through the shared memory. One problem is that this communication incurs large latency. Another problem is that the temporary use of the shared memory for inter-core communication places a burden on the shared memory.
In a so-called message-passing inter-core communication method, data is sent directly from the core A to the core B, not through the shared memory. Although this method can reduce the latency, it has other problems. Because a dedicated buffer for inter-core communication must be provided for each core, there are several problems, such as an increase in implementation cost, an increase in chip area, and an increase in power consumption. Further, a global address accessible by other cores must be assigned to the dedicated buffer, which places a burden on the limited address space.
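The message-passing alternative can be sketched in the same illustrative style. Here each core's dedicated receive buffer is modeled as a queue; the `buffers`, `send`, and `receive` names are assumptions introduced for the sketch, not terms from the original text.

```python
import queue

# Hypothetical per-core dedicated receive buffers, keyed by core id.
# In hardware, each such buffer costs chip area and power, and must be
# reachable through a globally assigned address, as noted in the text.
buffers = {"core_a": queue.Queue(), "core_b": queue.Queue()}

def send(dst, data):
    # Data goes straight into the destination core's dedicated buffer,
    # bypassing the shared memory entirely (hence the lower latency).
    buffers[dst].put(data)

def receive(core):
    # The receiving core drains its own dedicated buffer.
    return buffers[core].get()

send("core_b", "payload")
print(receive("core_b"))  # -> payload
```

The single direct transfer (one `send`, one `receive`) replaces the two shared-memory round trips of the previous scheme, which illustrates why the latency can be reduced while the per-core buffer cost is incurred.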