Large-scale integrated circuits (LSIs), which are a kind of semiconductor integrated circuit, such as a central processing unit (CPU) that includes a processor core (hereinafter, "core") that performs arithmetic processing, include a cache memory in order to increase processing speed. Furthermore, a semiconductor integrated circuit is connected to a main storage device, which is a main memory, and includes a memory access controller (MAC) that controls data storage in the cache memory and the main storage device. The cache memory is accessible at a higher speed than the main storage device and stores only the data, out of the data stored in the main storage device, that the CPU uses frequently.
When performing various arithmetic processes, the core first issues a data read request to the cache memory. When the requested data is in the cache memory, i.e., on a cache hit, the cache memory transfers the data to the core. In contrast, when the data is not in the cache memory, i.e., on a cache miss, but the data is in the main storage device, the cache memory reads the data from the main storage device and stores it. The core then accesses the cache memory again and acquires the data from the cache memory.
When a cache control unit of the semiconductor integrated circuit detects a data read request from the core and a cache miss occurs, the cache control unit issues a move-in request to the MAC. Upon detecting the move-in request, the MAC reads the data corresponding to the move-in request, i.e., the data corresponding to the cache miss, from the main storage device and transfers the data to the cache memory, and the cache memory stores the data. Furthermore, after the data is stored in the cache memory, upon again detecting a data read request from the core, the cache control unit reads the data required by the core from the cache memory and transfers it to the core.
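The read flow described above can be sketched as a minimal software model. All class and method names here (`CacheControlUnit`, `MemoryAccessController`, `move_in`) are illustrative assumptions, not identifiers from the source; the sketch only mirrors the sequence of a hit, a miss, a move-in request, and the subsequent re-read from the cache.

```python
class MemoryAccessController:
    """Hypothetical model of the MAC: serves move-in requests
    by reading the missed data from the main storage device."""

    def __init__(self, main_storage):
        self.main_storage = main_storage  # address -> data (main memory)

    def move_in(self, address):
        # Read the data corresponding to the cache miss from main storage.
        return self.main_storage[address]


class CacheControlUnit:
    """Hypothetical model of the cache control unit: transfers data on a
    hit, and on a miss issues a move-in request to the MAC."""

    def __init__(self, mac):
        self.mac = mac
        self.cache = {}  # address -> data (cache memory contents)

    def read(self, address):
        if address in self.cache:
            # Cache hit: transfer the data to the core directly.
            return self.cache[address]
        # Cache miss: issue a move-in request; the MAC transfers the
        # data from the main storage device and the cache stores it.
        self.cache[address] = self.mac.move_in(address)
        # The core then accesses the cache again and acquires the data.
        return self.cache[address]


main_storage = {0x100: "payload"}
ctrl = CacheControlUnit(MemoryAccessController(main_storage))
first = ctrl.read(0x100)   # miss: triggers a move-in, then returns the data
second = ctrl.read(0x100)  # hit: served from the cache memory
```

In this sketch the retry by the core is folded into a single `read` call for brevity; in the hardware sequence described above, the core re-issues its read request after the move-in completes.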
In recent single-core semiconductor integrated circuits, the increase in power consumption has become a problem that cannot be ignored, and performance improvement is thought to be reaching its limit. Such problems are dealt with by developing multi-core semiconductor integrated circuits, each of which includes multiple cores, and multi-bank semiconductor integrated circuits, in each of which the cache memory and the main storage unit are divided into multiple banks. Such a semiconductor integrated circuit includes multiple cores, multiple MACs, multi-bank cache memories, and a control unit that controls data transfer in the semiconductor integrated circuit.
In such a semiconductor integrated circuit, multiple cores access the multi-bank cache memories and data is transferred from the multi-bank cache memories to each core. The multiple cores significantly improve the arithmetic processing performance of the semiconductor integrated circuit. Furthermore, the multiple banks increase the efficiency with which the multiple cores access the multiple cache memories, so that the performance in supplying data from the cache memories to the cores significantly improves. For example, Japanese Laid-open Patent Publication No. 10-111798, Japanese Laid-open Patent Publication No. 5-257859, and Japanese Laid-open Patent Publication No. 3-025558 each disclose a technique related to cache memory control.
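One common way such banking raises access efficiency is address interleaving, in which low-order address bits select the bank so that accesses from multiple cores to consecutive addresses are spread across banks and can proceed in parallel. The source does not specify the bank-selection scheme, so the following is a sketch under that assumption, with an arbitrarily chosen bank count:

```python
NUM_BANKS = 4  # assumed bank count, for illustration only

def bank_of(address):
    # Interleave on the low-order bits of the address: consecutive
    # addresses map to different banks, so simultaneous requests from
    # multiple cores tend to hit distinct banks.
    return address % NUM_BANKS

# Six consecutive addresses spread across the four banks in rotation.
banks = [bank_of(a) for a in range(6)]
```

With this mapping, `banks` is `[0, 1, 2, 3, 0, 1]`: a stream of sequential requests occupies each bank only once per four accesses, which is what allows the banks to serve multiple cores concurrently.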
In such a semiconductor integrated circuit, each core and each cache memory are connected to each other via a bus to ensure stable data transfer between the cores and the cache memories. However, if a large number of cores and cache memories are used, buses corresponding to the number of cores and cache memories must be arranged, and the bus structure thus becomes complicated. This leads to a risk that the data transfer efficiency between the cores and the cache memories in the circuit significantly decreases.