In one conventional arrangement, the resources of a distributed computing system are shared among multiple users. The resources are shared, using virtualization and/or other (e.g., physically-based) techniques, in accordance with usage policies derived from user service agreements. In this conventional arrangement, such usage policies are set either in a centralized fashion, by a centralized control mechanism remote from the individual computing nodes in the system, or in a localized fashion, by respective localized control mechanisms at each computing node; in either case, enforcement may take place at the local computing nodes.
These resources typically include hardware and software resources that provide and/or impart various kinds of processing to packets received by the system, and/or provide other capabilities, such as various services, appliances, and offload processing. Depending upon the configuration of the distributed computing system, the computing nodes to which these resources are assigned, and their respective workloads, configurations, etc., are selected either by the centralized control mechanism or the localized control mechanisms. If a given packet is to undergo multiple kinds of processing by multiple resources, a switch is employed to forward the packet to and among the multiple resources.
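The forwarding pattern just described, in which a switch carries a packet to and among multiple processing resources in turn, can be sketched as follows. This is a minimal illustrative model only, not an actual implementation of the conventional arrangement; all names (`Switch`, `firewall`, `compressor`, the packet fields) are hypothetical:

```python
# Minimal sketch of switch-mediated forwarding of a packet through
# multiple processing resources (hypothetical names; illustrative only).

def firewall(packet):
    # One kind of processing: drop packets flagged as blocked.
    return None if packet.get("blocked") else packet

def compressor(packet):
    # Another kind of processing (e.g., an offload service):
    # annotate the payload as compressed.
    packet["payload"] = f"compressed({packet['payload']})"
    return packet

class Switch:
    """Forwards a packet to and among an ordered set of resources."""

    def __init__(self, resources):
        self.resources = resources  # ordered chain of processing steps

    def forward(self, packet):
        for resource in self.resources:
            packet = resource(packet)
            if packet is None:  # a resource dropped the packet
                return None
        return packet

switch = Switch([firewall, compressor])
result = switch.forward({"payload": "data", "blocked": False})
dropped = switch.forward({"payload": "data", "blocked": True})
```

In this sketch the chain order is fixed in advance, which mirrors the lack of coordination discussed below: the switch simply hands the packet to each resource in turn, regardless of traffic patterns or resource load.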
Unfortunately, the above conventional arrangement suffers from certain disadvantages and drawbacks. For example, although the processing that is to be imparted to the packets can be individualized on a per-user, per-policy basis, etc., the specific manner in which the policies, processing, and resource configurations/locations are implemented in the system typically is not coordinated in a fashion that meaningfully facilitates or improves system processing efficiency. For example, without such meaningful coordination, the resulting traffic and/or processing patterns in the system may lead to overuse, underuse, or thrashing of the switch, various resources, and/or certain ports of the switch and/or the various resources. Alternatively or additionally, without such meaningful coordination, traffic may undesirably “bounce” among the switch and/or certain resources, or take an undesirably large number of hops in the network.
The above conventional arrangement suffers from additional disadvantages and/or drawbacks. For example, the above conventional system may not be able to provide real-time or near-real-time, fine-grained quality-of-service adjustments to, and/or statistically accurate visibility of, workloads and/or resource utilizations as those workloads and/or utilizations change in and/or among the computing nodes. This is especially true in cases where the adjustments to, and/or visibility into, such workloads and/or utilizations are to be accomplished on a per-user/workload basis in adherence to the user service agreements.
A further drawback of this conventional arrangement is that it affords relatively little in the way of processing/policy flexibility and dynamic processing capabilities that depend, for example, upon the particular contents of received packets. For example, in at least certain circumstances, it would be useful to be able to modify or adjust the policies, processing, processing order, and/or processing resource configurations/locations that are applicable to and/or to be used in connection with received packets, based upon the particular contents of the received packets. Additional drawbacks of this conventional arrangement include an inability to reduce processing and packet-transmission latency and jitter to the extent desirable.
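The content-dependent flexibility contemplated above can be illustrated with a short sketch in which the processing chain applied to a packet is selected by inspecting one of its fields. All names here (`inspect_and_route`, the `"protocol"` field, the handler functions) are hypothetical and serve only to make the idea concrete:

```python
# Sketch of content-based selection of a processing chain
# (hypothetical field names and handlers; illustrative only).

def decrypt(packet):
    # Processing applied only to encrypted traffic in this sketch.
    packet["decrypted"] = True
    return packet

def log(packet):
    # Processing applied to all traffic in this sketch.
    packet["logged"] = True
    return packet

def inspect_and_route(packet, chains):
    """Choose and apply a processing chain based on packet contents."""
    kind = packet.get("protocol", "default")
    chain = chains.get(kind, chains["default"])
    for step in chain:
        packet = step(packet)
    return packet

chains = {
    "tls": [decrypt, log],   # encrypted traffic: decrypt, then log
    "default": [log],        # everything else: just log
}

out = inspect_and_route({"protocol": "tls"}, chains)
```

The point of the sketch is that the policy, the processing, and the processing order are all chosen per packet at the moment its contents are examined, which is precisely the dynamic capability the conventional arrangement lacks.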
Although the following Detailed Description will proceed with reference being made to illustrative embodiments, many alternatives, modifications, and variations thereof will be apparent to those skilled in the art. Accordingly, it is intended that the claimed subject matter be viewed broadly.