Along with the innovative advances in modern LSI (Large Scale Integration) technology, various types of information processing devices and information communication devices have been developed and sold, and have come to permeate our daily lives. With these types of devices, a variety of processing services are provided by having predetermined program code executed by a CPU (Central Processing Unit) or some other processor under an execution environment provided by an operating system.
However, in program design, it is sometimes useful to let a plurality of flows of control (sometimes referred to as “tasks”) exist in a program. As used herein, the term “a plurality of flows of control” means that, as shown in FIG. 6, a plurality of “points currently under execution” exist in a program's process flow, or in other words in its flowchart. In the example shown in the same figure, at a certain point T1, step S1 and step S3 are executed in flow I and flow II, respectively. Then, at the next point T2, after some time has passed, step S2 and step S3 are executed in flow I and flow II, respectively.
In general, when a plurality of flows exist and each of them manipulates data common to all, the consistency of the data cannot be maintained unless the flows are synchronized. As used herein, common data includes task lists, condition variables and the like. A condition variable is an abstraction of a condition on which a task waits, and is used as a means for communicating when a task should transition to a waiting state and when it should return to an executable state.
For example, consider a case where two flows of control B and C exist, and each flow of control performs the following process.    Procedure 1: Read the value of variable x    Procedure 2: Substitute into variable x a value obtained by adding 1 to the read value
When the two flows B and C each perform the process above once, the same process ends up being performed twice, so the value of variable x should increase by 2. However, when flow B and flow C interleave as follows, the value of variable x only increases by 1.    (1) Flow B executes procedure 1    (2) Flow C executes procedure 1    (3) Flow B executes procedure 2    (4) Flow C executes procedure 2
In order to prevent such an operational error, the referencing and updating of data from other flows must be prohibited during a sequence of referencing and updating operations (a transaction) performed in a certain flow.
In the example above, because flow C referenced and updated variable x before the sequence of operations in flow B, namely procedure 1 and procedure 2, was completed, the consistency of the data was lost.
A sequence of operations such as procedures 1 and 2 above can also be regarded as a part of a program that must not be executed simultaneously by a plurality of tasks, and such a part will hereinafter be referred to as a “critical section.” Also, prohibiting the referencing and updating of data by other tasks in order to solve the problem of data consistency in critical sections can be called “exclusive access control.” In other words, while a series of processes is performed on some data in one flow of control, operations on the same data by another flow of control are delayed; that is, operations on particular data are performed exclusively.
The present inventors consider it preferable that an exclusive access control mechanism have the following features.    (1) There is no possibility of having a high urgency process delayed by a low urgency process (of an occurrence of a reversal in priority).    (2) Exclusive access control can be performed even between a plurality of task sets whose scheduling is performed in accordance with distinct policies.
Of the features above, the reason (1) is necessary is obvious to those skilled in the art. Also, (2) is necessary in simultaneously running a plurality of operating systems on one computer system or in employing distinct scheduling methods for each of a plurality of task sets each having distinct characteristics.
For example, “mutex mechanisms” and “semaphore mechanisms” are used for exclusive access control between tasks. However, these methods have the problem that a high priority process may be delayed by a low priority process, that is, a reversal in priority may occur, and therefore feature (1) above is not satisfied.
As a method for mitigating this problem of reversal in priority, priority inheritance protocols have been proposed (for example, see “Priority Inheritance Protocols: An Approach to Real-Time Synchronization,” a paper by Lui Sha, Ragunathan Rajkumar and John P. Lehoczky, IEEE Transactions on Computers, Vol. 39, No. 9, pp. 1179-1185, September 1990). A priority inheritance protocol is a method in which, in a case where a high priority task attempts to operate on the same data while a low priority task is executing a series of operations, the priority of the low priority task is temporarily raised to the same priority as that of the high priority task.
The operation of a priority inheritance protocol is illustrated in FIG. 7. In this case, if high priority task B tries to start operating on some data while low priority task A is operating on the same data, a delay is inevitable. At this point, the priority of task A is temporarily raised to the same level as that of task B. Thereafter, even if a task C whose priority is lower than task B's but higher than task A's tries to initiate execution, execution of task A is never interrupted, since the priority of task A has been raised above that of task C. Then, after task A finishes, task B is able to initiate operation on the data while maintaining its consistency, without being interrupted by task C, whose priority is lower than its own.
However, this priority inheritance protocol is predicated on the idea that the scheduling of all tasks is performed in accordance with a common criterion, namely priority. Therefore, it is difficult to apply it to a system in which a plurality of scheduling policies co-exist (especially a system containing a task set that is not scheduled in accordance with priority), such as a task execution environment in which a plurality of operating systems operate simultaneously on a single computer system. In other words, feature (2) above is not satisfied.
As a method that does not entail these problems, the scheduler-conscious synchronization method may be given as an example (for example, see “Scheduler-conscious synchronization” a paper by Leonidas I. Kontothanassis, Robert W. Wisniewski, Michael L. Scott, (ACM Transactions on Computer Systems, Volume 15, Issue 1, 1997)). This method limits the effect a low urgency process has on a high urgency process by prohibiting other tasks from being dispatched during the execution of a critical section. Specifically, the delay time of a high urgency process is suppressed to below the maximum critical section execution time. Further, this method is predicated only on the presence of a mechanism for prohibiting dispatch.
However, since this method does not take into account application to a system in which a plurality of scheduling policies coexist, under such a task execution environment there still remains the possibility that a high urgency process would be delayed by a low urgency process. In other words, it does not satisfy feature (2) above.
In addition, as a method that satisfies both features (1) and (2) above, the non-blocking synchronization method may be given as an example (for example, see “Non-blocking Synchronization and System Design,” a paper by Michael Barry Greenwald, Ph.D. Thesis, Stanford University, 1999). However, special hardware support is required in order to apply it, and an increase in cost is incurred.