1. Field of the Invention
This invention relates to a tightly coupled multiprocessor system which comprises a main memory and a plurality of processors, and to a real memory management method effective for a large scale computer system which is connected to a large amount of main memory and manages a virtual memory by paging or swapping.
2. Description of the Prior Art
The tightly coupled multiprocessor system which comprises a main memory and a plurality of processors is increasingly used in large scale general-purpose computer systems as a technique for promoting high performance and reliability. An important problem for the tightly coupled multiprocessor system is what type of algorithm to use for allocating tasks or computer resources to the plurality of processors. The master/slave system is a system in which only a predetermined processor (the master processor) performs system resource management and task scheduling. Although this system has a comparatively simple structure, it has disadvantages in that the master processor is subjected to a substantial load and is vulnerable to faults, and hence it is not a tightly coupled multiprocessor system in the true sense. The system having individual operating systems (OS) is a system in which an operating system is provided for each processor and each operating system performs resource management and task scheduling for its assigned processor. In such a system, it is hard to balance the load and to improve overall performance. The single operating system configuration is a system in which one operating system runs on any of the processors as necessary. This is a tightly coupled multiprocessor system in the true sense. For further details of the above multiprocessor systems, refer to, for example, "Operating Systems" by S. E. Madnick.
As mentioned above, many multiprocessor systems have been described in the literature as examples of techniques for task scheduling and resource allocation.
Another mechanism which is increasingly used in high performance computer systems is one in which the apparent access speed of the main (real) memory is increased by using a high speed buffer memory. This buffer memory, called a cache, is not directly visible to the program; it is located between the processor and the main memory and stores those regions of the main memory which are frequently referred to. This kind of buffer memory is disclosed, for example, in the book by Ishida and Murata entitled "Very Large Scale Computer System", Dec. 1970. When the processor presents an access request to the main memory and the requested region is retained in the cache, the processor can fetch the region from the cache at high speed instead of from the main memory. Since the access time of the main memory is long compared with the processing time of the processor, this cache system, whose access speed is higher than that of the main memory (for example, the cache access time is about 1/10 of the main memory access time), is effective for improving the performance of computer systems.
This cache system can be effectively applied to computer systems using the above tightly coupled multiprocessor mechanism. However, there are some problems in applying the cache system. The first problem is a phenomenon which is called cache cancel. The cache is required to be installed close to the processor to allow quick access by the processor. Therefore, a cache memory exists for each processor. This configuration may cause a region (assumed to be region A) in the main memory to exist in two or more caches (assumed to be caches X and Y). In this case, when the processor corresponding to the cache X presents a write request to the region A, the data a in the region A of the cache X is rewritten, for example, to a'. As a result, the data a' contradicts the data a in the region A of the cache Y, and the region A of the cache Y is invalidated (for example, erased). By doing this, the effectiveness of the cache Y is decreased. This is referred to as the cache cancel phenomenon. Such a phenomenon may likewise occur when the data of the cache Y is rewritten. As the size of the cached region is increased to improve the effect of the cache, correspondingly larger regions in the caches of the other processors are invalidated, resulting in a reduction in the performance of the entire system.
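The cache cancel phenomenon described above can be sketched as follows. This is a simplified model with hypothetical class and function names (`Cache`, `write`), not the invention's mechanism: a write through one processor's cache discards any copies of the same region held by other caches.

```python
# Minimal sketch of cache cancel: two per-processor caches hold copies
# of main-memory region A; a write through cache X invalidates Y's copy.

class Cache:
    """A per-processor cache holding copies of main-memory regions."""
    def __init__(self, name):
        self.name = name
        self.lines = {}          # region address -> cached data

    def load(self, memory, addr):
        # Fetch the region from main memory into this cache.
        self.lines[addr] = memory[addr]
        return self.lines[addr]

    def invalidate(self, addr):
        # "Cache cancel": discard this cache's copy of the region.
        self.lines.pop(addr, None)


def write(memory, caches, writer, addr, value):
    """Write via one processor's cache and cancel all other copies."""
    memory[addr] = value
    writer.lines[addr] = value
    for c in caches:
        if c is not writer:
            c.invalidate(addr)   # other caches lose their copy of addr


memory = {"A": "a"}
x, y = Cache("X"), Cache("Y")

x.load(memory, "A")              # both processors cache region A
y.load(memory, "A")

write(memory, [x, y], x, "A", "a'")  # processor X rewrites region A

print(x.lines.get("A"))          # a' -- X holds the new data
print("A" in y.lines)            # False -- Y's copy was cancelled
```

After the write, cache Y must fetch region A from the main memory again on its next access, which is the performance loss the text describes.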
Another problem with memory control is caused by differences in access speed between regions of the main memory. In recent large scale computer systems, the number of tightly coupled processors increases year by year, and the main memory capacity increases in proportion to that number. As a result, a situation often occurs in which the connection distances between the processors and the various regions in the main memory are not uniform. For example, a processor can access one region of the main memory at high speed, while it accesses another region at low speed. For another processor, the access speeds to these regions are reversed.
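The reversed relation of access speeds can be illustrated with a small table. The processor names, region names, and timing figures below are hypothetical, chosen only to show that each processor has one near (fast) region and one far (slow) region:

```python
# Hypothetical access-time table (arbitrary units) illustrating the
# non-uniform relation described above: each processor reaches one
# main-memory region quickly and the other slowly, in reverse relation.

access_time = {
    ("P0", "region0"): 1,   # P0 is near region0 ...
    ("P0", "region1"): 4,   # ... and far from region1
    ("P1", "region0"): 4,   # for P1 the relation is reversed
    ("P1", "region1"): 1,
}

def nearest_region(processor, regions):
    """Pick the region this processor can access fastest."""
    return min(regions, key=lambda r: access_time[(processor, r)])

print(nearest_region("P0", ["region0", "region1"]))   # region0
print(nearest_region("P1", ["region0", "region1"]))   # region1
```

A memory management mechanism that ignores this table and allocates pages uniformly will place some of each processor's working set in its slow region, which motivates location-aware real memory management.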
As mentioned above, a memory management mechanism which gives good results in a conventional single processor system often becomes a performance bottleneck in a multiprocessor system.