The various embodiments described herein relate to the field of computer technology, particularly to the area of virtualization, wherein computer resources are emulated and simulated by a hypervisor system so that computing resources of a workstation, such as storage, applications, and computational resources, can be replaced with virtual computing resources. These virtual resources are backed by the hypervisor using real physical resources that are available on networked or local systems. The hypervisor can multiplex physical resources for more efficient use of computing facilities. More specifically, method and system embodiments are provided for operating a hypervisor system in a hypervisor system environment comprising at least one guest system having an operating system, wherein an external event (e.g., from system timers, disk input/output (I/O), power-off signals, sensing key-presses, etc.) is signalized from the hypervisor system to a respective guest system.
“Virtualization” is a general and broad term that refers, in the context of computer science, to the emulation and simulation of computer resources. Whereas abstraction usually hides details, virtualization pertains to creating illusions. Rather than hiding the physical characteristics of computing resources from the systems, applications, or end users that interact with them, virtualization emulates and simulates those resources so that they behave in a virtualized environment in the same way as they would on native hardware.
The interfaces and resources of a virtualized system are mapped onto the respective interfaces and resources of a real physical system.
FIGS. 1A and 1B schematically show the main components in a hypervisor environment for two different implementation types. The primary components include the hardware 10, a host operating system 12, a hypervisor software module 14, a guest operating system 16, and guest processes 18. As depicted in FIGS. 1A and 1B, a component can use interfaces and resources, particularly physical central processing units (CPUs) 15 and virtual (guest) CPUs 17.
Typical resources are processors (depicted as CPUs in FIGS. 1A and 1B), processor time (not depicted), and memory (not depicted). For example, in FIG. 1A, the hypervisor 14 uses hardware interfaces such as CPU operation codes (opcodes) and hardware resources such as installed memory. The guest operating systems 16 use hypervisor interfaces and hypervisor resources. Guest programs run as the respective processes 18 using the interfaces and resources of the respective guest operating systems 16.
FIGS. 1A and 1B illustrate two different types of prior art hypervisor implementations. The first type—shown in FIG. 1A—is to have the hypervisor 14 use the hardware interfaces 15 directly. This variant provides virtual machines 17 as interfaces to its guest operating systems 16. The guest operating systems 16 use these virtual machines 17 for their processes (applications or programs).
The second implementation type for hypervisor systems is outlined in FIG. 1B. In this scheme, the real physical hardware 10 is driven by a host operating system 12. A hypervisor 14 is a program that uses interfaces from the host operating system 12. This scheme is used in prior art hypervisor systems such as VMWARE, KVM and others. A hypervisor 14 then provides virtual machines 17 to its respective guest operating system 16. A guest operating system 16 uses one or more virtual machines 17 for its processes (applications or programs).
Signalization mechanisms, such as interrupt handling and the various other embodiments described herein, may occur at the hypervisor/guest operating system interface (see arrow 30 in FIG. 2) and thus may work on both types of hypervisors. In order to increase clarity of the disclosure, a simplified component stack will be used that describes both implementation types. More specifically, instead of using the terms “hypervisor” or “host operating system plus hypervisor,” the term “host” is used for both implementation types.
FIG. 2 illustrates the simplified component stack.
Inter-system signalization of runtime events is performed in the prior art using either interrupt handling or polling. Typical interrupt uses include system timers, disk I/O, power-off signals, sensing key-presses, etc.
FIG. 3 illustrates the most important state changes of host and guest code that run on a processor. The prior art interrupt delivery is a standard way of notifying an operating system about external events, as briefly mentioned above. Control flows that can run independently on different physical processors are separated by dotted boxes in FIG. 3.
With the prior art interrupt delivery, the program flow is interrupted, the status information is saved, and the control flow continues at a predefined location. An interrupt handler coordinates the execution of an interrupt routine. In hosted environments there are two types of interrupt handlers. Host interrupts are handled by a host interrupt handler, which is part of the host code. Guest interrupts are handled by the guest interrupt handler, which is part of the guest operating system.
Both handler modules are part of one of the controlling programs (guest operating system or host program) and handle the respective prior art signalization. The interrupt handler runs and triggers actions in the operating system. At the end of the interrupt handler, the control flow returns to the interrupted code, which then continues execution.
On real hardware, I/O devices use a physical processor interface to trigger the interrupt. The physical processor saves the old instruction pointer and sets the instruction pointer to the interrupt handler.
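The save-and-redirect mechanism described above can be illustrated with a minimal simulation. All class, method, and location names below are hypothetical and serve only to sketch the described control flow; they do not correspond to any real processor interface.

```python
# Minimal simulation of interrupt delivery as described above: on an
# interrupt, the processor saves the current instruction pointer, jumps
# to the interrupt handler, and later resumes the interrupted code.

class SimCPU:
    def __init__(self):
        self.ip = 0            # current instruction pointer
        self.saved_ip = None   # saved instruction pointer on interrupt
        self.trace = []        # record of executed locations

    def step(self):
        # Execute one instruction at the current location.
        self.trace.append(self.ip)
        self.ip += 1

    def deliver_interrupt(self, handler_ip):
        # Save the old instruction pointer, redirect to the handler.
        self.saved_ip = self.ip
        self.ip = handler_ip

    def return_from_interrupt(self):
        # Resume the interrupted code at the saved location.
        self.ip = self.saved_ip
        self.saved_ip = None

cpu = SimCPU()
cpu.step()                   # executes location 0; ip advances to 1
cpu.deliver_interrupt(100)   # interrupt arrives; handler lives at 100
cpu.step()                   # handler code runs at location 100
cpu.return_from_interrupt()  # control flow returns to interrupted code
cpu.step()                   # resumes at location 1
```

The trace records the execution order: interrupted code, handler, then the interrupted code again, mirroring the save/redirect/return cycle described in the text.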
In prior art virtualized environments, there are at least two levels of notifications, including notifications for the host program and notifications for the guest operating systems. Interrupts can be used for both types of notifications.
The host interrupt works almost identically to the physical hardware interrupt. A device or processor component triggers the host interrupt using a processor interface. The processor then saves the old instruction pointer and changes the control flow to the host interrupt handler. The host interrupt handler processes the notification and subsequently returns to the interrupted instruction.
Guest interrupts are implemented differently in the prior art. The host program is responsible for guest interrupts and decides if and when a guest receives an interrupt for notification.
With reference again to FIG. 3, there are steps in the control flow which are specific to a prior art hosted (virtualized) environment. The processor can execute host code 35 and 36 and guest code 33 and 34. Step 2 in FIG. 3 illustrates the moment when the host code starts/continues a guest by letting the processor execute guest code. At some point in time, the processor stops executing guest code and executes host code instead (step 1 in FIG. 3). The transition can be initiated voluntarily by the guest or involuntarily by an event. The transitions in step 1 and step 2 in FIG. 3 are quite common and happen regularly in hosted environments. A host interrupt is one of the events that triggers step 1 in FIG. 3.
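The alternation between host code and guest code (steps 1 and 2 in FIG. 3) can be sketched as a simple state machine. The mode names and event strings here are illustrative assumptions, not part of any real hypervisor interface.

```python
# Illustrative model of the host/guest transitions: the processor
# alternates between executing host code and guest code. A host event
# (e.g., a host interrupt) forces a transition back to host code.

def run(events):
    """Yield the mode (host or guest) the processor is in after each event."""
    mode = "host"
    for event in events:
        if mode == "host" and event == "start_guest":
            mode = "guest"          # step 2: host starts/continues a guest
        elif mode == "guest" and event == "host_interrupt":
            mode = "host"           # step 1: event returns control to host
        yield mode

trace = list(run(["start_guest", "tick", "host_interrupt", "start_guest"]))
```

The resulting trace shows guest execution interrupted by a host event and then resumed, matching the regular transitions described for hosted environments.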
If the host program needs to notify a guest, it emulates an interrupt to the guest. The host program chooses an eligible guest CPU for interrupt delivery. Subsequently, the chosen guest CPU is prepared for interrupt delivery. If the guest CPU is currently running, the host must stop this guest CPU (step 5 in FIG. 3). When the guest CPU is no longer running, the host eventually gains control (step 1 in FIG. 3) and then delivers the emulated interrupt.
In order to emulate an interrupt, the host saves the instruction pointer of the guest in the same way as physical hardware would do. Afterward, the instruction pointer is set to the address of the guest interrupt handler 31, 32 (step 4 in FIG. 3), and the guest CPU is restarted. Eventually, the guest interrupt handler 31, 32 finishes and returns to the address that is specified by the saved instruction pointer (step 3 in FIG. 3).
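The emulation sequence above (steps 5, 1, 4, and 3 in FIG. 3) can be sketched as follows. The class and function names are hypothetical and chosen only for illustration; they do not model any particular hypervisor's data structures.

```python
# Sketch of a host emulating an interrupt toward a guest CPU: stop the
# guest CPU if running, save its instruction pointer, redirect it to the
# guest interrupt handler, and restart it. The handler later returns to
# the saved instruction pointer.

class GuestCPU:
    def __init__(self, handler_ip):
        self.running = False
        self.ip = 0
        self.saved_ip = None
        self.handler_ip = handler_ip  # address of guest interrupt handler

def host_emulate_interrupt(vcpu):
    if vcpu.running:
        vcpu.running = False   # step 5: host stops the running guest CPU
    # Step 1: host gains control. Step 4: emulate the interrupt the same
    # way physical hardware would.
    vcpu.saved_ip = vcpu.ip    # save the guest instruction pointer
    vcpu.ip = vcpu.handler_ip  # set it to the guest interrupt handler
    vcpu.running = True        # restart the guest CPU

def guest_return_from_interrupt(vcpu):
    # Step 3: the handler finishes and returns to the saved address.
    vcpu.ip = vcpu.saved_ip
    vcpu.saved_ip = None

vcpu = GuestCPU(handler_ip=0x400)
vcpu.ip = 0x123
vcpu.running = True
host_emulate_interrupt(vcpu)       # guest now executes its handler
handler_entry = vcpu.ip            # handler entry point (0x400 here)
guest_return_from_interrupt(vcpu)  # guest resumes where it left off
```

Note that, unlike a physical interrupt, every step here is performed by host software rather than by the processor itself, which is the essential difference the text draws between host and guest interrupts.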
Another alternative used in the prior art for signalization purposes is the so-called “polling procedure”. During polling, a guest is active and requests status information from the host at predefined time intervals. However, such a polling technique is disadvantageous in that it wastes significant computing resources. Furthermore, such a polling technique introduces delays in the processing of external events.
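The two drawbacks of polling noted above can be made concrete with a small sketch. The function and parameter names are illustrative assumptions only.

```python
# Hedged sketch of the polling procedure: the guest wakes up every
# `period_ticks` and asks the host for status. An event that occurs
# between polls is only noticed at the next poll (delay), and every
# poll that finds no event consumes guest CPU time for nothing (waste).

def poll_for_event(host_status, period_ticks):
    """Return (tick at which the event was observed, number of wasted polls)."""
    wasted = 0
    tick = 0
    while True:
        tick += period_ticks       # guest wakes up once per period
        if host_status(tick):      # request status information from host
            return tick, wasted
        wasted += 1                # event not yet pending: poll was wasted

# Example: an external event occurs at tick 7, but with a polling period
# of 5 ticks the guest only observes it at tick 10, a 3-tick delay, after
# one wasted poll.
observed_at, wasted = poll_for_event(lambda t: t >= 7, period_ticks=5)
```

Shortening the period reduces the notification delay but increases the number of wasted polls, which is precisely the resource-versus-latency trade-off that motivates interrupt-style signalization instead.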
It is thus an objective of the present disclosure to provide improved signalization within a virtual computer system.