Systems with shared computing resources often require resource allocation and control. Current resource management schemes typically use locks to guarantee exclusive access to resources. For example, a mutex, a system primitive for resource locking, is available in operating systems such as Windows and POSIX-compliant systems. Multiple threads can request a mutex associated with a shared resource. The operating system assigns the mutex to the thread with the highest priority, which gains exclusive access to the resource. Once the highest-priority thread releases the mutex, the mutex is reassigned to the next highest-priority waiting thread, and so on.
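The grant order described above can be sketched with a toy model. The sketch below is purely illustrative (the class and method names are hypothetical, not any real OS API): waiting threads are kept in a priority queue, and on each release the highest-priority waiter is granted the lock next.

```python
import heapq

class PriorityMutex:
    """Toy model of a priority-based mutex: among the waiting
    threads, the one with the highest priority is granted the
    lock next. Illustrative simulation only, not a real OS API."""

    def __init__(self):
        self.holder = None
        self._waiters = []   # max-heap via negated priority
        self._order = 0      # tie-breaker: arrival order

    def request(self, thread_id, priority):
        """Acquire immediately if free, otherwise wait by priority."""
        if self.holder is None:
            self.holder = thread_id
        else:
            heapq.heappush(self._waiters, (-priority, self._order, thread_id))
            self._order += 1

    def release(self):
        """Reassign the mutex to the highest-priority waiter, if any."""
        if self._waiters:
            _, _, next_thread = heapq.heappop(self._waiters)
            self.holder = next_thread
        else:
            self.holder = None
        return self.holder

m = PriorityMutex()
m.request("A", priority=1)   # A acquires immediately
m.request("B", priority=2)   # B waits
m.request("C", priority=5)   # C waits, but with higher priority
m.release()                  # C is granted next despite arriving after B
print(m.holder)              # -> C
```

Note that the grant order here is driven entirely by priority, not arrival order; this is the property the rest of the section takes issue with.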
The priority-based scheme can sometimes lead to thread starvation. The effect is especially pronounced in systems that allow automatic priority adjustment. In some systems, if a first thread is holding a mutex and a second, higher-priority thread is waiting for the same mutex, the first thread inherits the higher priority of the waiting thread. If the high-priority threads take a long time to complete, any lower-priority thread will be starved. To alleviate the starvation problem, some systems automatically drop the priority of the thread holding a mutex, which can lead to repeated cycling between lower and higher priorities and even deadlock. Also, in some circumstances it is important that requests to use a shared resource, such as a critical section of code, be serviced in the order received, rather than based on the (potentially inherited) priority of the respective requesting threads.
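The starvation effect can be made concrete with a small simulation, again purely illustrative (function and thread names are hypothetical): a low-priority request arrives first, but as long as new high-priority requests keep arriving, priority-ordered servicing keeps deferring it.

```python
import heapq

def run_priority_queue(arrival_batches):
    """Toy simulation of priority-ordered servicing. One batch of
    requests arrives per step; at each step the highest-priority
    waiter is served. A steady stream of high-priority requests
    starves an earlier low-priority one. Illustration only."""
    waiters, order, served = [], 0, []
    for batch in arrival_batches:
        for tid, prio in batch:               # enqueue new arrivals
            heapq.heappush(waiters, (-prio, order, tid))
            order += 1
        if waiters:                           # serve one waiter per step
            served.append(heapq.heappop(waiters)[2])
    while waiters:                            # drain remaining waiters
        served.append(heapq.heappop(waiters)[2])
    return served

# "low" arrives first but keeps losing to later high-priority arrivals.
batches = [[("low", 1), ("hi0", 9)], [("hi1", 9)], [("hi2", 9)]]
print(run_priority_queue(batches))  # -> ['hi0', 'hi1', 'hi2', 'low']
```

Under first-come-first-served ordering, "low" would have been serviced first; under priority ordering it is serviced last, and with an unbounded stream of high-priority arrivals it would never be serviced at all.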
Therefore, there is a need for a way to provide access to a shared computing resource, such as a critical section of code, that does not suffer from the priority-inheritance, starvation, and deadlock problems that can arise when a mutex or similar locking mechanism is used, and that ensures requests are serviced in the order received.