As the value and use of information continue to increase, individuals and businesses seek additional ways to process and store information. One option available to users is information handling systems. An information handling system generally processes, compiles, stores, and/or communicates information or data for business, personal, or other purposes, thereby allowing users to take advantage of the value of the information. Because technology and information handling needs and requirements vary between different users or applications, information handling systems may also vary regarding what information is handled, how the information is handled, how much information is processed, stored, or communicated, and how quickly and efficiently the information may be processed, stored, or communicated. The variations in information handling systems allow information handling systems to be general or configured for a specific user or specific use such as financial transaction processing, airline reservations, enterprise data storage, or global communications. In addition, information handling systems may include a variety of hardware and software components that may be configured to process, store, and communicate information and may include one or more computer systems, data storage systems, and networking systems.
FIG. 1 illustrates a conventional converged network environment 100 in which two storage arrays 102a and 102b are accessed by an information handling system 110 that is configured as a server having two converged network interface controllers (C-NICs), also known as converged network adapters (CNAs). As shown, the two CNAs of the server 110 access the storage arrays 102a and 102b using data center bridging (DCB) protocols 106a and 106b and respective top-of-rack (ToR) fibre channel forwarder (FCF) switches 104a and 104b across respective data paths 108a and 108b. In FIG. 1, server 110 executes a Windows Server® (WS) 2012 operating system that includes built-in capability for network interface controller (NIC) teaming to provide fault tolerance on network adapters. This NIC Teaming also allows the aggregation of bandwidth from multiple CNAs by providing several different ways to distribute outbound traffic between the CNAs. In this regard, network traffic can be spread according to a hash of the source and destination addresses and ports (IP hashing), or it can be spread according to the originating port for virtual machines. Specifically, conventional NIC Teaming has two modes of configuration:
Switch-independent modes. These algorithms make it possible for team members to connect to different switches because the switch is not aware that the interface is part of a NIC team at the host. These modes do not require the switch to participate in the NIC teaming.
Switch-dependent modes. These algorithms require the switch to participate in the NIC teaming. Here, all interfaces of the team are connected to the same switch.
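By way of illustration, the IP-hashing distribution used by switch-independent teaming can be sketched as follows. This is a simplified, hypothetical model (the function name and hashing details are illustrative and do not correspond to any vendor implementation): all packets of one TCP/IP session hash to the same team member, so distribution evens out only across many sessions to unique destinations.

```python
import hashlib

def select_nic(src_ip: str, src_port: int, dst_ip: str, dst_port: int,
               nic_count: int) -> int:
    """Pick a team-member index from a hash of the session 4-tuple.

    Illustrative sketch: a deterministic hash of source/destination
    addresses and ports, reduced modulo the number of physical NICs.
    """
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % nic_count

# Every packet of one session maps to the same NIC, so a single bulk
# transfer between one pair of hosts cannot use more than one NIC.
nic_first = select_nic("10.0.0.1", 49152, "10.0.1.5", 443, 2)
nic_again = select_nic("10.0.0.1", 49152, "10.0.1.5", 443, 2)
assert nic_first == nic_again  # deterministic per session
```

Because the hash is computed independently at the host, no switch cooperation is needed, which is what allows the team members to attach to different switches.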
In the conventional network environment 100 of FIG. 1, scenarios can exist where a hypervisor (Hyper-V®) enabled WS 2012 server 110 has two converged network adapters (CNAs) allowing LAN and SAN traffic on the same CNA, with each NIC being connected to a respective different network switch 104a or 104b, and with all the components (server 110, storage 102 and network switches 104) supporting IEEE 802.1-based Data Center Bridging (DCB) standards (which may include Priority-based Flow Control “PFC”, Enhanced Transmission Selection “ETS”, and Data Center Bridging eXchange “DCBX”). In such a scenario, switch-dependent NIC teaming mode will not work, as each CNA is connected to a different switch. If switch-independent NIC teaming is enabled using one of the available load balancing mode algorithms such as “Hyper-V Port” as illustrated by GUI 200 of FIG. 2, traffic is load balanced based on the source virtual port ID associated with a virtual machine (VM).
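The “Hyper-V Port” behavior described above can be sketched as a static mapping from a VM's virtual switch port ID to one physical team member. The sketch below is hypothetical (the function name and modulo mapping are illustrative, not the actual algorithm), but it shows the key property: all traffic from a given VM is pinned to a single physical NIC.

```python
def nic_for_vport(vport_id: int, nic_count: int) -> int:
    """Map a VM's virtual switch port ID to one physical NIC.

    Illustrative sketch: the mapping is per virtual port, not per
    packet, so every frame a VM sends leaves through the same NIC.
    """
    return vport_id % nic_count

# With two CNAs in the team, a VM on virtual port 7 always uses NIC 1.
# If that VM generates bulk LAN traffic, NIC 1 carries all of it while
# NIC 0 (and its data path to the other switch) stays underutilized.
busy_nic = nic_for_vport(7, 2)
```

This pinning is what creates the single-path congestion scenario discussed next: load balancing happens only across VMs, never within one VM's traffic.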
However, if one VM starts generating a large amount of LAN traffic, that traffic will pass through only one physical NIC of the Hyper-V® enabled host 110, which can choke one heavily utilized data path 108b to one switch 104b as shown. Even with a selected IP hash option, the evenness of traffic distribution depends on the number of TCP/IP sessions to unique destinations, and there is no benefit for a bulk transfer between a single pair of hosts. In such a case, when DCB-enabled switch 104b detects congestion on a queue for a specified priority, it starts sending priority flow control (PFC) pause frames 109 for the 802.1p traffic across the heavily-utilized data path 108b to the sending DCB-enabled WS 2012 server 110 as shown to ensure lossless flow for storage traffic by instructing the server 110 to stop sending the specific type of traffic that is causing the detected congestion. In this way, the PFC pause frames 109 are used to ensure that large amounts of queued LAN traffic do not cause storage traffic to be dropped. However, at the same time a second NIC of the server 110 that is transmitting data to the other switch 104a remains less utilized.
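For reference, a PFC pause frame such as frames 109 can be sketched as follows. The field layout follows the published IEEE 802.1Qbb format (MAC Control EtherType 0x8808, opcode 0x0101, a priority-enable vector, and eight per-priority pause quanta); the function name and example values are illustrative.

```python
import struct

PFC_DST_MAC = bytes.fromhex("0180c2000001")  # MAC Control multicast address
MAC_CONTROL_ETHERTYPE = 0x8808
PFC_OPCODE = 0x0101  # priority-based flow control

def build_pfc_frame(src_mac: bytes, pause_quanta: dict) -> bytes:
    """Build an 802.1Qbb PFC pause frame (illustrative sketch).

    pause_quanta maps an 802.1p priority (0-7) to a pause time in
    quanta (0-65535, one quantum = 512 bit times). A set bit in the
    enable vector asks the peer to pause only that priority's queue,
    leaving the other priorities flowing.
    """
    enable_vector = 0
    quanta = [0] * 8
    for prio, q in pause_quanta.items():
        enable_vector |= 1 << prio
        quanta[prio] = q
    return (PFC_DST_MAC + src_mac
            + struct.pack("!HHH", MAC_CONTROL_ETHERTYPE, PFC_OPCODE,
                          enable_vector)
            + struct.pack("!8H", *quanta))

# Pause only priority 3 (a priority commonly assigned to storage/FCoE
# traffic classes) for the maximum duration; LAN priorities are unaffected.
frame = build_pfc_frame(bytes(6), {3: 0xFFFF})
```

Because the pause applies per priority rather than per link, the switch can throttle the congested traffic class on data path 108b without halting the other priorities on the same NIC.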