The present disclosure relates generally to information handling systems, and more particularly to a system for providing load balancing in an information handling system network using programmable data plane hardware.
As the value and use of information continues to increase, individuals and businesses seek additional ways to process and store information. One option available to users is information handling systems. An information handling system generally processes, compiles, stores, and/or communicates information or data for business, personal, or other purposes thereby allowing users to take advantage of the value of the information. Because technology and information handling needs and requirements vary between different users or applications, information handling systems may also vary regarding what information is handled, how the information is handled, how much information is processed, stored, or communicated, and how quickly and efficiently the information may be processed, stored, or communicated. The variations in information handling systems allow for information handling systems to be general or configured for a specific user or specific use such as financial transaction processing, airline reservations, enterprise data storage, or global communications. In addition, information handling systems may include a variety of hardware and software components that may be configured to process, store, and communicate information and may include one or more computer systems, data storage systems, and networking systems.
Some information handling systems such as, for example, front end servers, are used as load balancers to distribute workloads across multiple computing resources such as, for example, back end servers. For example, front end servers may receive resource requests from client systems and attempt to optimize back end server use, maximize throughput, minimize response times, avoid overloading any particular back end server, and/or provide other load balancing benefits known in the art. However, the use of front end servers as load balancers raises a number of issues. For example, the positioning of front end servers can lead to inefficient traffic paths, sometimes referred to as “traffic tromboning”, which can occur when front end servers are positioned at the edge of a network. For example, traffic that enters the network through one or more switches to reach the front end server may then be load balanced by the front end server and routed back through at least one of those switches again to reach the back end server.
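The distribution policy applied by such a load balancer can be as simple as rotating requests across the pool of back end servers. The following is a minimal round-robin sketch for illustration only; the server addresses, function names, and policy are assumptions and are not taken from the disclosure:

```python
import itertools

# Hypothetical back end server addresses (illustrative only).
BACK_END_SERVERS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

def make_round_robin_balancer(servers):
    """Return a function that assigns each incoming request to the
    next back end server in rotation, so that no single server
    receives a disproportionate share of the workload."""
    cycle = itertools.cycle(servers)
    return lambda request: next(cycle)

balance = make_round_robin_balancer(BACK_END_SERVERS)

# Six requests are spread evenly across the three servers.
assignments = [balance(f"request-{i}") for i in range(6)]
print(assignments)
```

In practice a front end server may use weighted, least-connections, or hash-based policies instead of simple rotation, but the distribution step shown here is the work that, in the systems described above, is performed by the CPU of the front end server.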
Furthermore, the use of front end servers to perform load balancing results in a number of other inefficiencies. For example, traffic routed through a front end server must enter that front end server through its Network Interface Controller (NIC), be load balanced by the central processing unit (CPU) executing software stored on a memory device, and then be routed back through the NIC to the back end server. Each of these operations introduces latency into the load balancing process, and consumes CPU cycles in the front end server on networking or networking services that could otherwise be used for the non-networking workloads that the front end server is responsible for or capable of performing. Further still, the use of multiple front end servers as load balancers requires those front end servers to synchronize their states using protocols such as Zookeeper, which can introduce further latency.
Accordingly, it would be desirable to provide an improved load balancing system.