1. Technical Field
Example embodiments of the present invention relate in general to technologies for maintaining cache coherency and more specifically to a method for maintaining coherency between caches in a multiprocessor apparatus and an apparatus for maintaining cache coherency using the method.
2. Related Art
In general, a computer system includes three basic blocks: a central processing unit (CPU), a memory, and an input/output (I/O) unit. These blocks are mutually connected through a bus, and input devices such as a keyboard and a mouse are used to input instructions or data through the I/O unit. The instructions and data may be stored in the memory, and the CPU may read the data stored in the memory and process the data in accordance with the stored instructions. The processed results may be stored again in the memory or output to an output device, such as a printer, through the I/O unit.
In particular, computer systems utilizing multiple processors have recently come into wide use. In such a computer system, tasks and functions are not processed by a single CPU alone, and therefore the overall computing capability of the system may be improved.
In theory, a computer system having n processors should process an amount of work n times larger than that of a single processor, and therefore should operate at a speed n times faster than that of the single processor. However, in order to use multiple processors, the location of the most recent version of a data item is required to be known, and each processor is required to know this information whenever it needs the data to perform an operation. This requirement is called data coherency.
The multiple processors have dedicated memories, each usually referred to as a cache or a cache memory. The cache is used to increase the speed of an operation. In the case of a processor having a cache, when any information is read from a main memory to be used by the processor, the corresponding information and its main memory address are also stored in the cache memory.
The cache memory is typically a static random access memory (SRAM). When a new read or write instruction is issued, the system may determine whether the corresponding information exists in a cache memory. When the corresponding information exists in the cache memory, this is called a 'hit' (that is, the corresponding information can be utilized from the cache). The corresponding information may then be accessed from the cache rather than from the main memory, and a connection to the main memory is not required. When the corresponding information does not exist in the cache memory, the data is copied from the main memory and stored in the cache memory for future use.
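The hit/miss behavior described above may be sketched as follows. This is an illustrative example only, not part of any described embodiment; names such as SimpleCache are hypothetical, and a direct-mapped organization is assumed for simplicity.

```python
# Minimal sketch of a direct-mapped cache: on a read, check whether the
# requested address is already cached (a 'hit'); otherwise copy the data
# from main memory into the cache for future use (a 'miss').
# All names here are hypothetical illustration choices.

class SimpleCache:
    def __init__(self, num_lines):
        self.num_lines = num_lines
        self.lines = {}  # line index -> (address, data); the stored address acts as the tag

    def read(self, address, main_memory):
        index = address % self.num_lines
        line = self.lines.get(index)
        if line is not None and line[0] == address:
            return line[1], "hit"        # served from the cache, no main-memory access
        # Miss: fetch from main memory and keep a copy in the cache
        data = main_memory[address]
        self.lines[index] = (address, data)
        return data, "miss"

memory = {0x10: "A", 0x20: "B"}
cache = SimpleCache(num_lines=8)
print(cache.read(0x10, memory))  # first access misses and fills the cache
print(cache.read(0x10, memory))  # repeated access hits
```

A repeated access to the same address is served from the cache without touching main memory, which is the speed advantage the passage above describes.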
In the case of a system employing multiple cache memories in particular, data from a given memory position may simultaneously exist in the main memory and in at least one cache memory. However, the data in the main memory and the data in a cache memory are not always the same. This case may occur when a processor updates data stored in its cache memory without updating the main memory or another cache memory, or when another bus master changes data in the main memory without the copy in a processor's cache memory being updated. A bus master is any other device that can write or read instructions in the main memory.
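The incoherence scenario described above may be sketched as follows. This is an illustrative example only, assuming two processors with private write-back caches and no coherency protocol; all names are hypothetical.

```python
# Sketch of the incoherence problem: two processors each hold a cached
# copy of the same address; one updates its private cache without
# updating main memory, so the other processor and main memory
# still observe a stale value.

main_memory = {0x40: 1}

# Both processors read address 0x40, filling their private caches.
cache_p0 = {0x40: main_memory[0x40]}
cache_p1 = {0x40: main_memory[0x40]}

# Processor 0 writes only its own cache (write-back, no coherency protocol).
cache_p0[0x40] = 2

# Processor 0's copy now disagrees with processor 1's copy and with main memory.
print(cache_p0[0x40], cache_p1[0x40], main_memory[0x40])  # 2 1 1
```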
In this manner, in order to overcome problems that may occur in a multiple-cache system when the contents of a cache memory and the contents of the main memory are not identical to each other, cache coherency protocols have been suggested.
The cache coherency protocol may be a method by which caches, processors, a main memory, and alternate bus masters communicate with each other. The cache coherency protocol may ensure that consistency is maintained between data stored in the main memory of a computer system and data stored in a cache connected to the same bus. In other words, the cache coherency protocol may be a protocol that is used by a computer system to track data moving among processors, a main memory, and different cache memories.
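One common family of such protocols may be sketched as follows: an invalidation-based (MSI-style) snooping scheme, in which every cache observes writes broadcast on a shared bus and drops its stale copies. This is a hedged illustration under those assumptions, not the protocol of any particular system; names such as CoherentCache are hypothetical, and write-through to main memory is assumed for simplicity.

```python
# Sketch of an invalidation-based snooping coherency protocol:
# on a write, the writing cache broadcasts an invalidation on the
# shared bus so that other caches discard their stale copies, and the
# write is propagated to main memory (write-through for simplicity).

class CoherentCache:
    def __init__(self, bus):
        self.data = {}    # address -> value
        self.bus = bus
        bus.append(self)  # register on the shared bus so writes can be snooped

    def read(self, address, main_memory):
        if address not in self.data:          # miss: fetch from main memory
            self.data[address] = main_memory[address]
        return self.data[address]

    def write(self, address, value, main_memory):
        # Broadcast an invalidation: every other cache drops its copy.
        for cache in self.bus:
            if cache is not self:
                cache.data.pop(address, None)
        self.data[address] = value
        main_memory[address] = value          # write-through keeps memory current

bus = []
memory = {0x40: 1}
c0, c1 = CoherentCache(bus), CoherentCache(bus)
c0.read(0x40, memory)
c1.read(0x40, memory)            # both caches now hold a copy
c0.write(0x40, 2, memory)        # invalidates c1's copy and updates memory
print(c1.read(0x40, memory))     # c1 misses and re-fetches the current value: 2
```

Because the stale copy is invalidated at write time, a subsequent read by the other processor misses and re-fetches the up-to-date value, avoiding the disagreement described above.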
Even in general methods that maintain coherency between caches using such a cache coherency protocol, a multiprocessor system has a limited main memory bandwidth, so that various problems, such as bottlenecks, may occur.
Therefore, there is a demand for a method for more effectively maintaining cache coherency.