In one conventional arrangement, the resources of a distributed computing system are shared among multiple users. The resources are shared, using virtualization and/or other (e.g., physically-based) techniques, in accordance with usage policies derived from user service agreements. In this conventional arrangement, such usage policies are set either in a centralized fashion, by a centralized control mechanism remote from the individual computing nodes in the system, or in a localized fashion, by respective localized control mechanisms at each computing node; in either case, enforcement may take place at the local computing nodes. In this conventional arrangement, software processes, such as virtual switch (vSwitch) processes, are employed in these mechanisms to control the interaction of virtual machines with various infrastructure components in the system.
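The per-node enforcement described above can be illustrated with a minimal sketch. This is a hypothetical example, not an actual vSwitch or controller API: the names `UsagePolicy`, `POLICIES`, and `admit` are invented for illustration, and the policies stand in for limits derived from user service agreements.

```python
# Hypothetical sketch of per-node policy enforcement; UsagePolicy, POLICIES,
# and admit() are illustrative names, not a real vSwitch/controller API.
from dataclasses import dataclass

@dataclass
class UsagePolicy:
    """Resource limits derived from a user's service agreement."""
    max_bandwidth_mbps: int
    max_vcpus: int

# Policies may be pushed down by a centralized controller or set locally;
# either way, each computing node consults them to enforce usage limits.
POLICIES = {
    "tenant-a": UsagePolicy(max_bandwidth_mbps=1000, max_vcpus=8),
    "tenant-b": UsagePolicy(max_bandwidth_mbps=100, max_vcpus=2),
}

def admit(tenant: str, bandwidth_mbps: int, vcpus: int) -> bool:
    """Return True if the requested resources fit the tenant's policy."""
    policy = POLICIES.get(tenant)
    if policy is None:
        return False  # no service agreement on record for this tenant
    return (bandwidth_mbps <= policy.max_bandwidth_mbps
            and vcpus <= policy.max_vcpus)
```

In practice, such checks would run in the vSwitch or node control path for every virtual machine placement or flow setup, which is one source of the CPU overhead discussed below.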
Unfortunately, the use of such conventional mechanisms and/or software (e.g., vSwitch) processes may result in excessive, inconsistent, and/or significantly fluctuating central processing unit (CPU) overhead in the computing nodes in the system. This may adversely impact CPU and/or computing node performance (e.g., increased latency and increased latency variance, also known as jitter). Additionally, as network bandwidth, network transmission speed, the services provided, and/or the number of computing nodes and/or policies in the system increase, it may be difficult to scale the use of such conventional mechanisms and/or software processes without incurring undesirably large increases in virtualization processing overhead, risk of network transmission losses, and/or processing latencies.
The above conventional arrangement suffers from additional disadvantages and/or drawbacks. For example, the above conventional system may be unable to provide real-time or near-real-time, fine-grained quality of service adjustments to, and/or statistically accurate visibility of, workloads and/or resource utilizations as those workloads and/or utilizations change in and/or among the computing nodes. This is especially true where such adjustments and/or visibility are to be accomplished on a per-user/workload basis in adherence to the user service agreements. Additionally, this conventional arrangement does not contemplate integration or close coupling of security processes in the system's infrastructure with security processes in the system's compute and/or storage nodes. These additional disadvantages and/or drawbacks may limit the functionality and/or efficiency of this conventional arrangement, and/or increase its complexity and/or its cost to operate and/or implement.
A further drawback of this conventional arrangement is that a significant amount of low-level programming (e.g., of many disparate interfaces at each of the system's individual nodes) may be required to program the nodes' individual behaviors so that they conform and/or are consistent with, and/or implement, the policies and/or user agreements. This problem can be exacerbated by the different types of infrastructure that may be involved (e.g., compute, network, storage, energy, and/or security resources), each of which may be configured independently (e.g., via separate scheduler/management mechanisms) and may therefore conflict with the others and/or operate or be utilized sub-optimally in the platforms and/or in other shared infrastructure components. As can be readily appreciated, coordinating the programming of these interfaces to make them consistent with the policies and/or service agreements can be quite challenging, especially if, as is often the case, the system's users, nodes, applications, virtual machines, workloads, resources, policies, and/or services change frequently (e.g., as they are added to or removed from the system).
One proposed solution, which involves processing network packets in hardware, is disclosed in the Peripheral Component Interconnect (PCI) Special Interest Group (SIG) Single Root Input/Output Virtualization (SR-IOV) and Sharing Specification, Revision 1.1, published Jan. 20, 2010 (hereinafter, “SR-IOV specification”). Unfortunately, this proposed solution effectively eliminates the ability of vSwitch processes to directly affect hardware-processed packets. It thereby eliminates the ability to add local control, services, and/or policies coordinated with the virtual machine manager and/or vSwitch. This reduces the processing flexibility and/or services available in this conventional arrangement, and/or may require that all such services be provided by the SR-IOV hardware itself (which may be unrealistic).
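The bypass behavior described above can be sketched concretely. On Linux, SR-IOV virtual functions (VFs) are typically enabled through the kernel's `sriov_totalvfs` and `sriov_numvfs` sysfs attributes; once a VF is assigned directly to a virtual machine, its packets are switched in NIC hardware and do not traverse the host's vSwitch. The helper below is a hypothetical sketch (the function name `enable_vfs` and the configurable `sysfs` root are invented for illustration), assuming a Linux host with an SR-IOV-capable NIC.

```python
# Hypothetical sketch, Linux-specific: enabling SR-IOV virtual functions (VFs)
# via the kernel's sysfs attributes. Traffic on a VF assigned to a VM is
# switched in NIC hardware, bypassing the host vSwitch entirely.
from pathlib import Path

def enable_vfs(ifname: str, num_vfs: int, sysfs: str = "/sys/class/net") -> None:
    """Request num_vfs virtual functions on the given physical NIC."""
    dev = Path(sysfs) / ifname / "device"
    total = int((dev / "sriov_totalvfs").read_text())  # hardware VF limit
    if num_vfs > total:
        raise ValueError(f"{ifname} supports at most {total} VFs")
    # Writing to sriov_numvfs asks the NIC driver to create the VFs.
    (dev / "sriov_numvfs").write_text(str(num_vfs))
```

Because the resulting VF traffic never reaches the vSwitch, any per-tenant policy, service, or monitoring implemented there cannot be applied to it, which is the loss of local control noted above.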
Although the following Detailed Description will proceed with reference being made to illustrative embodiments, many alternatives, modifications, and variations thereof will be apparent to those skilled in the art. Accordingly, it is intended that the claimed subject matter be viewed broadly.