In a typical cloud-based data center, a large collection of interconnected servers provides computing and/or storage capacity for execution of various applications. For example, a data center may comprise a facility that hosts applications and services for subscribers, i.e., customers of the data center. The data center may, for example, host all of the infrastructure equipment, such as networking and storage systems, redundant power supplies, and environmental controls. In most data centers, clusters of storage systems and application servers are interconnected via a high-speed switch fabric provided by one or more tiers of physical network switches and routers. More sophisticated data centers provide infrastructure spread throughout the world, with subscriber support equipment located in various physical hosting facilities.
Packet flows in a data center are often made up of one or more “flowlets,” where a flowlet is a burst of packets sent in rapid succession, and successive flowlets of a packet flow may be separated by relatively long idle periods with no traffic. Accordingly, when a packet flow is originally assigned to a path through the switch fabric, the path may appear to have low utilization only because its utilization is measured during a gap between flowlets of a corresponding packet flow. Thus, if a new packet flow is assigned to the same path, and if flowlets of the two packet flows occur at the same time, the bandwidth of the path may be exceeded, resulting in dropped packets.
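The idle gaps between flowlets also suggest a remedy: because a path can be re-selected at a flowlet boundary without reordering in-flight packets, a load balancer may re-pick the least-loaded path whenever a flow's idle gap exceeds a threshold, rather than pinning the flow to one path based on a single utilization sample. The following is a minimal Python sketch of that idea; the class name, the idle-gap threshold, and the byte-count load metric are illustrative assumptions, not a description of any particular switch's implementation.

```python
class FlowletLoadBalancer:
    """Toy flowlet-aware path selector (illustrative sketch only).

    A gap longer than `gap_threshold` seconds since a flow's last
    packet marks the start of a new flowlet, at which point the flow
    may safely be moved to the currently least-loaded path.
    """

    def __init__(self, num_paths, gap_threshold=0.5):
        self.num_paths = num_paths
        self.gap_threshold = gap_threshold
        self.path_load = [0] * num_paths   # bytes recently sent on each path
        self.flow_state = {}               # flow_id -> (path, last_seen_time)

    def route(self, flow_id, pkt_bytes, now):
        """Return the path index for this packet, given a timestamp `now`."""
        path, last_seen = self.flow_state.get(flow_id, (None, None))
        # Re-select the path only at a flowlet boundary (new flow, or the
        # flow has been idle long enough); packets within one flowlet stay
        # on the same path to avoid reordering.
        if path is None or now - last_seen > self.gap_threshold:
            path = min(range(self.num_paths), key=lambda p: self.path_load[p])
        self.path_load[path] += pkt_bytes
        self.flow_state[flow_id] = (path, now)
        return path
```

For example, two flows whose flowlets overlap are spread across paths by current load, and a flow that resumes after a long gap may land on a different, less-loaded path than it originally used.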