Computer systems, and more specifically networked computer systems, are a centerpiece of modern society. Advances in fabrication and miniaturization have permitted the production of increasingly faster processors and larger data storage.
Commerce, and indeed business in general, is highly reliant on networked computer systems for nearly all aspects of business activity, such as, but not limited to, offering products for sale, maintaining account records, and analyzing data. Yet the needs for resources may, and often do, change from time to time.
These changing needs for resources have long been an issue. Initially, a physical hardware system would be established with a specific processor, memory, bus, peripherals and operating system. This specific system would then support a specific type of application run within the specific operating system. When a new or different application was desired that required a different operating system, or perhaps even a different type of underlying hardware system, a new physical system would be assembled and provided. The original physical machine might or might not be recycled.
Virtual machines have helped reduce this waste. Simply put, a hypervisor is a piece of computer firmware, hardware or software that creates and runs virtual machines. To the executing application or operating system, the virtual machine is essentially indistinguishable from a physical hardware machine, i.e., a bare metal system. However, the hypervisor permits multiple virtual machines to exist on one bare metal system, and in many situations permits different types of virtual machines to co-exist.
The computing system on which a hypervisor runs one or more virtual machines is typically identified as the host machine or system. Each virtual machine is typically called a guest machine. The hypervisor presents each guest operating system with a virtualized operating platform and manages the execution of the guest operating systems.
Because the hypervisor virtualizes the operating platform, the underlying bare metal system becomes less of an issue: if a change of the operating platform is desired or required, the hypervisor can simply provide a new virtualized operating platform.
In the environment of cloud computing, where physical resources are shared among virtualized systems, the use of virtual machines can greatly improve the utilization and allocation of physical resources. In other words, several virtual machines may be established by one hypervisor on one bare metal system. By way of the hypervisor, each virtual machine utilizes the resources of the underlying bare metal system when and as its applications require, so that while one virtual system is idle another may be active. Of course, if all virtual machines attempt to be active at the same time, overall performance may be degraded should the combined resource requests exceed the physical resources of the bare metal system.
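By way of illustration only, the resource-sharing behavior described above may be sketched as a simple oversubscription check. The function name and the numbers of CPUs are hypothetical and are not drawn from any particular hypervisor:

```python
# Illustrative sketch (hypothetical names and figures): whether the combined
# active demands of several guest virtual machines exceed the physical
# resources of one bare metal host.

def is_oversubscribed(host_cpus, guest_demands):
    """Return True if the guests' combined active CPU demand exceeds the host's."""
    return sum(guest_demands) > host_cpus

# Four guests on an 8-CPU host: fine while most guests are idle...
print(is_oversubscribed(8, [2, 1, 0, 0]))   # combined demand 3 of 8 -> False
# ...but performance degrades if all guests become active at once.
print(is_oversubscribed(8, [4, 4, 2, 2]))   # combined demand 12 of 8 -> True
```

In practice a hypervisor time-shares the physical resources among guests rather than rejecting requests outright, so oversubscription manifests as degraded performance rather than outright failure.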
As each virtual machine exists and operates as if it were a physical system, in general these virtual machines must conform to the traditional tenets of networking and system interconnection.
The Open System Interconnection model, also referred to as the Open Source Interconnection model or more simply the OSI model, is a product of the Open System Interconnection effort at the International Organization for Standardization. More specifically, it is a prescription characterizing and standardizing the functions of a communication system in terms of seven abstraction layers of concentric organization: Layer 1, the physical layer; Layer 2, the data link layer; Layer 3, the network layer; Layer 4, the transport layer; Layer 5, the session layer; Layer 6, the presentation layer; and Layer 7, the application layer.
Each layer is generally known as an N layer. At each layer, two entities, i.e., N-entity peers, interact by means of the N protocol by transmitting protocol data units, or “PDUs”. A service data unit (“SDU”) is a specific unit of data that has been passed down from one layer to another, and which the lower layer has not yet encapsulated into a PDU. Moreover, the PDU of any given layer, e.g., Layer N, is the SDU of the layer below, Layer N−1. In other words, the SDU is the payload of a given PDU.
Transfer of an SDU between layers is therefore a matter of encapsulation, performed by the lower layer, which adds appropriate headers and/or footers to the SDU such that it becomes a PDU. These headers and/or footers are part of the communication process permitting data to get from a source to a destination within any network.
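By way of illustration only, the encapsulation described above may be sketched as follows. The header and footer values shown are hypothetical placeholders, not actual protocol headers:

```python
# Illustrative sketch (hypothetical names and values): each lower layer
# encapsulates the SDU handed down from the layer above by adding its own
# header (and, at Layer 2, a trailing footer), producing that layer's PDU.

def encapsulate(sdu: bytes, header: bytes, footer: bytes = b"") -> bytes:
    """The SDU passed down from above becomes the payload of the new PDU."""
    return header + sdu + footer

app_data = b"GET /index.html"                      # Layer 7 data
segment  = encapsulate(app_data, b"TCP|")          # Layer 4 PDU; app_data is its SDU
packet   = encapsulate(segment, b"IP|")            # Layer 3 PDU; segment is its SDU
frame    = encapsulate(packet, b"ETH|", b"|FCS")   # Layer 2 PDU, with header and footer
```

The receiving system reverses the process, with each layer stripping its own header and/or footer and passing the recovered SDU up to the layer above.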
Briefly, Layer 1, the physical layer, defines the electrical and physical specifications of the device and the communication medium, e.g., copper cable, optical cable, wireless, etc. Layer 2, the data link layer, provides the functional and procedural means to transfer data from one entity to another, and to possibly correct errors that may occur in the physical layer. The data is arranged in logical sequences known as frames.
Layer 3 is the network layer and provides the functional and procedural means of transferring variable-length data sequences from a source host on one network to a destination host on a different network. Routers operate at this layer and make the Internet possible by properly handling the interconnections and handoffs between different networks. Layer 4 is the transport layer, responsible for data transfer between end users and the reliability of a given link through flow control, segmentation/desegmentation and error control.
Layers 5, 6 and 7, the session, presentation and application layers, are successively higher and closer to the user, and consequently further and further away from the physical layer. The greater the number of transfers between layers to accomplish any given task, the greater the complexity, latency and general opportunity for error.
Indeed, within a typical local area network (LAN), wherein the systems are part of the same network, data transfer is generally accomplished at the Layer 2 level. However, when joining one LAN to another, or to a wide area network (WAN), the addresses of the LAN may be meaningless, or perhaps even duplicative of addresses in other LANs, and as such translation at Layer 3 is the generally accepted method for maintaining harmony in communication.
While this is a viable option, and indeed the existence of the Internet demonstrates overall functionality, it often comes with overhead costs and burdens of complexity. For example, whereas a database within a LAN may be communicated with via Layer 2, and thereby enjoy enhanced integration as a networked component, accessing a similar database over Layer 3 requires Internet Protocol (“IP”) address translation or other similar transformation, which by its very nature requires the originating system to be configured for, and perhaps engage in, appropriate measures for the proper identification and addressing of data to and from the remote database, as would not otherwise be required with a LAN-based database. For example, the LAN systems may be on one network or VLAN while the remote database is part of another network or VLAN, the differences requiring at the very least a router to adjust and account for the differences in network identity and configuration.
These issues apply as well to virtual machines as supported by a hypervisor. Moreover, one or more LANs as established by a plurality of virtual machines interacting with each other and/or with remote physical machines must adhere to the same principles for data traffic and communication.
This can be a significant issue with respect to cloud computing environments, as oftentimes LANs are configured with default options, which is to say that LAN #1 for company A has settings identical to LAN #2 for company B because each was set up with the same default configurations. Although their configurations may be identical, it is essential that data traffic for each company be properly segregated. A failure in segregation may result in system inoperability, data loss, breach of confidentiality, and the like.
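By way of illustration only, the segregation problem may be sketched as follows. The tenant identifiers, addresses and data shown are hypothetical, and the sketch merely shows why an address alone cannot serve as a delivery key when two tenants' LANs carry identical default addressing:

```python
# Illustrative sketch (hypothetical identifiers): two tenants whose LANs were
# built from identical defaults can carry the same private address. Keying
# traffic by a tenant identifier in addition to the address keeps it
# segregated; keying by address alone would conflate the two companies.

deliveries = {}

def deliver(tenant_id: str, dest_ip: str, payload: str) -> None:
    # The segregation key includes the tenant, not just the (duplicated) address.
    deliveries[(tenant_id, dest_ip)] = payload

# Both companies use the same default address 192.168.1.10.
deliver("company_a", "192.168.1.10", "record for A")
deliver("company_b", "192.168.1.10", "record for B")

print(deliveries[("company_a", "192.168.1.10")])  # record for A, not B's
```

Overlay techniques such as VLAN tags or VXLAN network identifiers serve an analogous role in practice, attaching a per-tenant identifier to otherwise identically addressed traffic.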
Some attempts have been made to build further upon the virtualization provided by hypervisors. Indeed, U.S. Pat. No. 8,484,639 to Huang, et al. presents an environment where a pseudo hypervisor is established on top of a first hypervisor. Virtual machines for each customer can then be segregated, so that the virtual machines for company A are provided by pseudo hypervisor A while the virtual machines for company B are provided by pseudo hypervisor B.
As noted by Huang, “[b]y implementing a pseudo-hypervisor layer, the provider of virtualization software is able to effectuate management capabilities unavailable with a single level of virtualization due to the lack of visibility into the underlying virtual machines layer. The pseudo-hypervisor layer is visible to and is managed by the provider of virtualization software. Therefore, the provider of virtualization software can provision or migrate virtual machines among pseudo-hypervisors and within a single pseudo-hypervisor. By migrating applications to virtual machines controlled by a common pseudo-hypervisor, the provider of virtualization can ensure that applications are co-located on the same underlying hardware. Ensuring that applications are co-located on the same underlying hardware decreases input/output latency and increases bandwidth.”
However, this migration between pseudo hypervisors of the Huang system is a localized event, i.e., it is migration from one pseudo hypervisor within the Huang system to another pseudo hypervisor within the Huang system. Issues of overlapping configurations are apparently managed by the segregation to different pseudo hypervisors and the management of packet routing to these different pseudo hypervisors.
Moreover, although Huang clearly teaches co-location on underlying hardware to decrease input/output latency and increase bandwidth for those applications, Huang does not teach migration of virtual machines between pseudo hypervisors for high availability, maintenance or to achieve burst availability of a larger resource pool. There is also no mention or teaching in Huang of moving pseudo hypervisors in their entirety between hypervisors for these same reasons. And Huang does not address network management isolation or segmentation of the pseudo hypervisors.
Indeed, according to Huang, if company A and company B have identical configurations, segregation between them is only possible because the virtual machines of each are provided by different pseudo hypervisors, pseudo hypervisor A and pseudo hypervisor B. Any attempt to co-locate the virtual machines of A and B upon the same pseudo hypervisor would result in Huang system failure.
Moreover, although cloud computing does provide an improvement in many ways over previous options for expansion and contraction of resources to meet needs, it is not without its own set of challenges and difficulties.
It is to innovations related to this subject matter that the claimed invention is generally directed.