Technical Field to which the Invention Belongs
The present invention relates to a process dispatching method, and more particularly to a process dispatching method for use in a multiprocessor system in which each of a plurality of central processing units (CPUs) has a cache memory.
Definition of Terms
Before describing the prior art, key terms used in this specification will be defined.
A process control block (PCB) is the set of information necessary for the execution of a process, and is stored in the main memory unit. Information stored in a PCB includes the priority of the process. The detailed structure of a PCB will be described elsewhere in this specification.
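By way of illustration only, a PCB of the kind defined above might be modeled as follows. The field names and the string-valued state are hypothetical choices for this sketch, not part of the specification:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessControlBlock:
    """Illustrative PCB; field names are hypothetical, not those of the invention."""
    pid: int                 # process identifier
    priority: int            # execution priority of the process
    state: str = "waiting"   # "running", "ready", or "waiting"
    registers: dict = field(default_factory=dict)  # saved CPU context

# A PCB resides in main memory; a dispatcher would consult its priority field.
pcb = ProcessControlBlock(pid=1, priority=5)
```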
A running state is one state of a process, in which the process is being executed on a processor.
A ready state is another state of a process, in which the process can be immediately executed. A process in a ready state waits in a ready queue for a processor to enter an idle state.
A waiting state is still another state of a process, in which the process is waiting for the occurrence of some event.
A re-dispatch is to bring a process into a running state from a waiting state and to dispatch it again to one of the processors.
A process swap is to bring a process in a running state on a processor into a ready state or a waiting state and to dispatch another process to this processor.
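The three process states and the two transitions defined above can be sketched as a minimal model. The scheduler bookkeeping shown (a processor represented as a dictionary with a `current` slot) is a hypothetical simplification for illustration:

```python
from enum import Enum

class State(Enum):
    RUNNING = "running"
    READY = "ready"
    WAITING = "waiting"

class Process:
    def __init__(self, name):
        self.name = name
        self.state = State.WAITING

def re_dispatch(process, processor):
    """Bring a process from a waiting state into a running state on a processor."""
    assert process.state == State.WAITING
    process.state = State.RUNNING
    processor["current"] = process

def process_swap(processor, next_process, to_state=State.READY):
    """Move the running process into a ready or waiting state and
    dispatch another process to this processor."""
    old = processor["current"]
    old.state = to_state
    next_process.state = State.RUNNING
    processor["current"] = next_process
    return old
```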
Prior Art
Next will be described the characteristics of a multiprocessor system in which each processor has its own cache memory. In a multiprocessor system, each processor is provided with a cache memory to facilitate quick memory access. However, the installation of cache memories in multiprocessor systems gives rise to the following problems, which can be explained with reference to the following hypothetical multiprocessor system and its processing sequence. The hypothetical multiprocessor system has processors X through Z. Processors X through Z have cache memories x through z, respectively. During the first step of the processing sequence, a process A is executed by the processor X. During the second step, process A is brought into a waiting state. During the third step, some process other than process A is executed by the processor X. During the fourth step, process A is re-dispatched to the processor Z. At this time, the data a used in the processing of process A remain in the cache memory x of processor X.
The aforementioned processing sequence involves the following two problems.
First, since the destination of the re-dispatch of process A is the processor Z, the data a in the cache memory x cannot be utilized. The processor Z which executes process A at the fourth step has to read the necessary data out of the main memory. As a result, the processing of processor Z is slowed down. This problem will be hereinafter referred to as a hit rate decline.
Second, cache cancellation may occur. As processor Z reads data similar to data a out of the main memory during the fourth step, the same data may be present in the cache memories x and z at the same time. The data a in the cache memory x are cancelled when processor Z alters the contents of the data a in cache memory z. When cache cancellation occurs, the processing capacity of the system is reduced.
These two problems could be avoided if the destination of the re-dispatch of process A were processor X, which had executed process A immediately before. Thus, in a multiprocessor system where each processor has a cache memory, the method which determines the destination of a re-dispatch affects the performance of the system.
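The conclusion above, that re-dispatching a process to the processor which last executed it avoids both the hit rate decline and cache cancellation, can be sketched as a simple affinity rule. The data structures are hypothetical illustrations, not the invention's mechanism:

```python
def choose_processor(last_cpu, idle_cpus):
    """Prefer the processor whose cache likely still holds the process's data."""
    if last_cpu in idle_cpus:
        return last_cpu  # data a remain cached; no read from main memory needed
    # Otherwise fall back to any idle processor, accepting the cache penalty.
    return idle_cpus[0] if idle_cpus else None

# Process A last ran on processor X; both X and Z are idle.
assert choose_processor("X", ["X", "Z"]) == "X"
# If X is busy, A must go elsewhere and re-read its data from main memory.
assert choose_processor("X", ["Z"]) == "Z"
```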
Next will be described the prior art. One example of a method to dispatch a process on the basis of the above-described characteristics of a multiprocessor system is disclosed in U.S. Pat. No. 5,193,172. According to lines 52 through 53 of the second column of the gazette in which this patent is published, one of the objects of this prior art is to reduce the occurrence of cache cancellation accompanying dispatches.
According to this prior art, real pages of the main memory are divided into a plurality of groups. One processor is allocated to each group. A processor allocated to a group preferentially uses the real pages belonging to this group. To each processor are allocated tasks to be executed preferentially by the processor. To each task are allocated real pages to be used in the processing of the task.
Referring to lines 3 through 13 of the eighth column of the gazette, a re-dispatch is performed using the following procedure according to this prior art. The number of real pages allocated to the task to be dispatched is counted for each of said groups. As a result of the counting, the group to which the greatest number of real pages is allocated is determined, and the task is dispatched to the processor having priority in the use of this group.
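The counting procedure described in the cited patent can be sketched as follows. The mappings from real pages to groups and from groups to processors are illustrative data structures assumed for this sketch, not a verbatim implementation of the prior art:

```python
from collections import Counter

def select_processor(task_pages, page_to_group, group_to_cpu):
    """Count the task's real pages per group and dispatch the task to the
    processor having priority in the use of the most-represented group."""
    counts = Counter(page_to_group[page] for page in task_pages)
    best_group, _ = counts.most_common(1)[0]
    return group_to_cpu[best_group]

# Hypothetical allocation: pages 0 and 1 belong to group G0 (processor X),
# page 2 belongs to group G1 (processor Z).
page_to_group = {0: "G0", 1: "G0", 2: "G1"}
group_to_cpu = {"G0": "X", "G1": "Z"}
assert select_processor([0, 1, 2], page_to_group, group_to_cpu) == "X"
```

Because the page-to-group allocation rarely changes, repeated calls return the same processor, which is why this method tends to re-dispatch a task to the processor that executed it before.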
According to the above-described dispatching method, unless there is any change in the real pages allocated to each task, this task is re-dispatched to the processor which was executing it in the past. Therefore, the probability of cache cancellation is reduced.
However, this prior art involves the following problem.
The problem is that, in order to manage the aforementioned groups, many procedures have to be executed. This prior art requires alterations of the aforementioned groups during processing in order to allocate an appropriate number of memory pages to each processor. However, according to the statement from line 20 of the sixth column to line 35 of the seventh column of the gazette, any such group alteration requires the execution of many complex procedures, which moreover have to be repeatedly executed.