A datacenter is a facility that physically houses various equipment, such as computers, servers (e.g., web servers, application servers, database servers), switches, routers, data storage devices, load balancers, wire cages or closets, vaults, racks, and related equipment, for the purpose of storing, managing, processing, and exchanging data and information between nodes. A node is typically either a client or a server within the datacenter. Datacenters also provide application services and management for various data processing functions, such as web hosting, internet, intranet, telecommunication, and information technology services.
Datacenters are a unique environment because all the machines and services provided to clients reside within a controlled and well-monitored environment. Additionally, datacenters are not static; they are constantly growing to accommodate more machines, services, and users. Therefore, scaling datacenters to maintain performance as services and users grow is an ongoing effort.
Conventionally, two approaches are used when scaling datacenters to achieve more performance. Vertical scaling involves using larger machines (i.e., computers, servers), either by adding more central processing units (CPUs) to one machine or by upgrading machines to include faster CPUs. For example, a datacenter administrator whose machines currently include 32 CPUs may purchase 32 more CPUs to create a 64-CPU machine. The alternative, known as horizontal scaling, involves adding more physical machines to the datacenter. More specifically, horizontal scaling involves adding many smaller machines and balancing the load across these smaller machines within the datacenter. For example, if a datacenter currently holds 50 machines, each with one or two CPUs, then horizontal scaling would involve adding another 50 machines, again with one or two CPUs, to the datacenter.
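The arithmetic behind the two approaches can be sketched briefly; the machine and CPU counts below simply reuse the hypothetical figures from the examples above.

```python
# Vertical scaling: the same single machine, with its CPU count doubled.
vertical = {"machines": 1, "cpus_per_machine": 32 + 32}   # upgrade 32 -> 64 CPUs

# Horizontal scaling: double the number of small machines instead.
horizontal = {"machines": 50 + 50, "cpus_per_machine": 2}  # add 50 more 2-CPU machines

def total_cpus(config):
    """Total CPU count for a given scaling configuration."""
    return config["machines"] * config["cpus_per_machine"]

print(total_cpus(vertical))    # one large 64-CPU machine
print(total_cpus(horizontal))  # 200 CPUs spread across 100 small machines
```

The trade-off is that the horizontal configuration offers more aggregate CPUs but requires the load to be balanced across many machines, which is the problem the following paragraphs address.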
Typically, in order to address the load balancing aspect of horizontal scaling, load balancing switches are used in the middle tier of the datacenter network. The load balancing switches are capable of making intelligent decisions regarding which servers are best suited to handle requests from clients by inspecting the network traffic. For example, if a client sends a packet to a particular server, a load balancing switch intercepts and inspects the packet, and based on the amount of traffic on the various servers in the datacenter and the packet contents, forwards the packet to an appropriate server. Typically, the load balancing switches are not transparent to the datacenter network and need to be reconfigured each time servers are added to the datacenter.
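The forwarding decision described above can be sketched as a least-loaded selection. The server names, connection counts, and the least-connections policy below are hypothetical simplifications; a real load balancing switch would also weigh packet contents and other traffic metrics.

```python
# Active connection counts per server (hypothetical figures).
servers = {"web-1": 12, "web-2": 4, "web-3": 9}

def pick_server(servers):
    """Return the server currently handling the fewest connections."""
    return min(servers, key=servers.get)

def forward(packet, servers):
    """Choose a target for the intercepted packet and account for the load."""
    target = pick_server(servers)
    servers[target] += 1  # the chosen server now carries one more connection
    return target         # a real switch would rewrite and send the packet here

target = forward(b"GET / HTTP/1.1", servers)
print(target)  # -> "web-2", the least-loaded server at the time of the request
```

Note that this policy depends on the switch maintaining an up-to-date view of every server, which is why adding servers requires reconfiguring the switch.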
As noted above, load balancing switches need to be able to inspect network traffic in order to make intelligent decisions regarding where to forward requests. Consequently, network traffic is typically not encrypted, and users must rely on the physical security of the datacenter network. In some instances, load balancing switches may include the functionality to decrypt network traffic, inspect packets, and then re-encrypt the packets before forwarding them to a server. In order to perform this decryption and re-encryption of network traffic, the load balancing switches would also require the encryption/decryption keys.
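The decrypt, inspect, and re-encrypt flow can be sketched as follows. A toy XOR cipher stands in for real TLS purely for illustration, and the path-based routing rule is hypothetical; the point is only that the switch must itself hold the key in order to inspect the traffic.

```python
KEY = 0x5A  # shared key that the load balancing switch must also possess

def encrypt(data: bytes, key: int) -> bytes:
    """Toy symmetric cipher (XOR) standing in for real encryption."""
    return bytes(b ^ key for b in data)

decrypt = encrypt  # XOR is its own inverse

def switch_forward(ciphertext: bytes):
    """Decrypt to inspect the packet, choose a server, re-encrypt, forward."""
    plaintext = decrypt(ciphertext, KEY)                       # decrypt
    target = "web-1" if b"/images" in plaintext else "web-2"   # inspect contents
    return target, encrypt(plaintext, KEY)                     # re-encrypt

client_packet = encrypt(b"GET /images/logo.png HTTP/1.1", KEY)
target, forwarded = switch_forward(client_packet)
print(target)  # -> "web-1", chosen after inspecting the decrypted request
```

Without `KEY`, the switch could not read the request path, so the routing decision would be impossible; this is the key-distribution burden the paragraph above describes.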