Processes are entities that are scheduled by the operating system to run on processors. In a multithreaded program, different threads may execute simultaneously on different processors. If the processes executing the different threads of a program are scheduled to execute simultaneously on different processors, then multiprocessing of the multithreaded program is achieved. In addition, if multiple system processes are scheduled to run simultaneously on multiple processors, the operating system has achieved multiprocessing.
Generally, in all process scheduling at least four types of contenders compete for processor access:
1) Processes waking up after waiting for an event;
2) Work needing to be done after an interrupt;
3) Multiple threads in the operating system;
4) Multiple threads in user processes.
One problem with existing implementations of multithreaded systems is that a bottleneck occurs when multiple threads must wait at a final, central point to be dispatched to a processor. This scheme may use a lock manager to schedule processes. As a result, each process must wait in line for a processor. Inefficient scheduling may occur if a lower priority process is in the queue ahead of a higher priority process. Thus, the effect of the lock manager is to reduce a multithreaded program to a single thread of execution at the point where processes are dispatched.
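The priority-inversion effect of a single first-in, first-out dispatch queue can be illustrated with a small sketch (a hypothetical Python model; the process names and priority values are illustrative and not taken from any actual scheduler):

```python
import heapq
from collections import deque

# Hypothetical runnable set: (priority, name); a lower number means higher priority.
runnable = [(5, "low-prio-daemon"), (1, "high-prio-interactive")]

# Central FIFO dispatch queue: processes are served strictly in arrival order,
# so the high-priority process waits behind the low-priority one.
fifo = deque(runnable)
fifo_order = [name for _, name in fifo]

# Priority-ordered dispatch: the highest-priority runnable process is taken first.
heap = list(runnable)
heapq.heapify(heap)
prio_order = [heapq.heappop(heap)[1] for _ in range(len(runnable))]

print(fifo_order)   # arrival order: low-priority process dispatched first
print(prio_order)   # priority order: high-priority process dispatched first
```

Under the FIFO model the high-priority process is delayed behind the low-priority one, which is the inversion the single central dispatch point permits.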
Another problem related to existing implementations is that efficiency is reduced because of overhead associated with processes. In a classical Unix¹ implementation, only one kind of process entity can be created or processed. A process consists of a system-side context and a user-side context. Because a classical Unix implementation has only one creating entity, the fork, system processes carry more context information than is actually required. The user context (i.e., the user block and various process table fields) is ignored by the system and never used. However, this context still imposes memory and context-switching overhead, and thus consumes unnecessary resources.

¹ Unix is a trademark of AT&T Bell Laboratories.
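The distinction between a forked process, which duplicates the creator's user context, and a thread, which shares it, can be sketched in Python (a minimal Unix-only illustration using `os.fork` and `threading`; the shared variable stands in for the user-side context):

```python
import os
import threading

# A module-level variable stands in for the user-side context.
counter = [0]

# A thread shares the creator's address space: its update is visible afterwards.
t = threading.Thread(target=lambda: counter.__setitem__(0, counter[0] + 1))
t.start()
t.join()
assert counter[0] == 1  # the thread mutated the shared context

# A forked process receives a *copy* of the user context: its update is private.
pid = os.fork()
if pid == 0:             # child process
    counter[0] += 100    # mutates only the child's private copy
    os._exit(0)
os.waitpid(pid, 0)
assert counter[0] == 1   # the parent's context is unchanged

print("thread shares context; fork duplicates it")
```

The duplicated copy is exactly the context that a system process never uses, yet it must still be allocated and switched, which is the overhead described above.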
Another problem with existing implementations is that when an interrupt occurs, the processor which receives the interrupt stops processing to handle the interrupt, regardless of what the processor was doing. This can result in delaying a high priority task by making the processor service a lower priority interrupt.
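The cost of servicing an interrupt unconditionally can be shown with a trivial timing model (hypothetical, with arbitrary time units; the function and parameter names are illustrative only):

```python
# Hypothetical model: a processor is running a high-priority task when a
# lower-priority interrupt arrives.

def completion_time(task_remaining, interrupt_cost, interrupt_serviced_here):
    """Time until the high-priority task finishes on this processor."""
    # If the interrupt is taken on this processor, its handler runs first
    # and the task is stalled for the handler's duration.
    return task_remaining + (interrupt_cost if interrupt_serviced_here else 0)

# Servicing the lower-priority interrupt on the busy processor delays the task...
delayed = completion_time(task_remaining=10, interrupt_cost=4,
                          interrupt_serviced_here=True)
# ...whereas a processor not forced to take the interrupt finishes on time.
undelayed = completion_time(task_remaining=10, interrupt_cost=4,
                            interrupt_serviced_here=False)

print(delayed, undelayed)
```

In this model the high-priority task's completion slips from 10 to 14 units purely because the processor must stop for the lower-priority interrupt.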
Another problem can occur when an implementation has multiple schedulers in a tightly coupled multiprocessing environment. Each scheduler controls one type of process, so all schedulers contend with one another for access to the processors. Decentralizing the run queue function in this way incurs overhead penalties from the complexity of managing locally scheduled processes.