The speed and efficiency of many computing applications depend upon the availability of processing resources. To this end, computing architectures such as the “virtual machine” design, developed by International Business Machines Corporation, share common processing resources among multiple processes. Such an architecture may conventionally rely upon a single computing machine having one or more physical controllers, or central processing units (CPUs). The CPUs may execute software configured to simulate multiple virtual processors. Each virtual processor may embody an independent unit of execution, or thread.
A CPU that can concurrently maintain more than one such unit or path of execution, or thread, is called a multithreaded CPU or processor. In a multithreaded CPU system, each thread performs a specific task that may be executed independently of the other threads. For efficiency purposes, each thread may share some physical resources of a CPU, such as buffers, hardware registers and address translation tables. This hardware architecture mandates that all threads of a multithreaded CPU execute within the same virtual address space. For instance, if a CPU supports two threads, both threads must execute within the same partition or hypervisor, as discussed below.
A partition may logically comprise a portion of a machine's CPUs, memory and other resources, as assigned by an administrator. As such, the administrator may share physical resources between partitions. Each partition will host an operating system and may have multiple virtual processors. In this manner, each partition operates largely as if it were a separate computer. An underlying program called a “hypervisor,” or partition manager, may use this scheme to assign and dispatch physical resources to each partition. For instance, the hypervisor may intercept requests for resources from operating systems to globally share and allocate them. If the partitions are sharing processors, the hypervisor allocates physical CPUs among the virtual processors of the partitions sharing the processor.
In an effort to increase the speed of conventional (non-multithreaded), partitioned environments where partitions are sharing processors, system designers commonly implement yield calls. Yield calls generally represent programmable attempts to efficiently distribute CPUs among partitions that share processing resources. For instance, an operating system executing a thread may issue a yield call to a hypervisor whenever the thread spins in a lock or executes its idle loop. Such an idle thread may have no work to perform, while a locked thread may “spin” as it waits for the holder of the lock to relinquish it. In response to the yield call, the thread may enter an idle state, while the hypervisor reallocates the CPU.
More particularly, a virtual processor that is spinning on a lock held by another virtual processor may initiate a yield-to-active call. In response to the yield-to-active command, the virtual processor may enter an idled state and relinquish its CPU. The hypervisor may reallocate the yielded CPU to the next virtual processor presented on a dispatch schedule of the hypervisor.
Should a thread be in an idle loop, the operating system executing the thread may issue a timed-yield call. Such a yield call may cause the operating system to relinquish its CPU for a period specified within the yield call. The duration may correspond to an interval of time where the operating system running the thread does not require the CPU that has been dispatched to it. As such, the timed-yield allows the CPU to be utilized by another virtual processor until a time-out event registers. Of note, the virtual processor may be in a different partition. The time-out may coincide with the expiration of the specified interval, at which time the hypervisor will end the yield operation and dispatch a CPU back to the operating system that originally executed the thread.
While such yield applications may succeed in improving the efficiency of some processing systems, known yield processes are not designed for multithreaded CPU environments. As a result, yield processes often do not conform to the operating rules and hardware requirements specific to multithreaded CPU environments. Namely, known yield processes fail to address the requirement that all threads executing on a multithreaded CPU must execute within the same virtual address space. Furthermore, conventional yield processes do not account for the independent execution of such threads, nor do they offer a means of monitoring and coordinating thread execution. Consequently, there is a need for an improved manner of managing the allocation of physical computing resources within a multithreaded CPU environment.