Data centers can include a large number of switches directing data (e.g., formatted within a network packet) among a large number of servers. These switches and servers are often mounted within racks, and the data centers can include hundreds, thousands, or more racks.
The interconnect coupling the switches to each other and, therefore, directing data to the servers can be based on a variety of architectures or network topologies. For example, a Clos network couples switches in a multi-stage hierarchy to provide non-blocking functionality such that any input can reach any output while reducing the number of ports needed. A butterfly network organizes switches within “ranks” and couples a switch in one rank with two switches in an adjacent rank. This can result in fewer switches being used, but a butterfly network is a blocking network. Thus, different network topologies can provide different advantages and disadvantages.
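The Clos connectivity described above can be sketched programmatically. The following is a minimal, hypothetical illustration (the function and switch names are illustrative, not from the original text) of a three-stage Clos fabric in which every ingress-stage switch links to every middle-stage switch, and likewise for the egress stage, so that any input can reach any output through some middle switch:

```python
# Hypothetical sketch of 3-stage Clos connectivity: r edge switches per
# side and m middle switches. Names are illustrative assumptions.

def clos_links(r: int, m: int):
    """Return the sets of (ingress, middle) and (middle, egress) links.

    Every ingress switch connects to every middle switch, and every
    middle switch connects to every egress switch.
    """
    ingress_links = {(f"in{i}", f"mid{j}") for i in range(r) for j in range(m)}
    egress_links = {(f"mid{j}", f"out{k}") for j in range(m) for k in range(r)}
    return ingress_links, egress_links

ingress, egress = clos_links(r=4, m=3)

# Each of the 4 ingress switches reaches all 3 middle switches: 12 links.
assert len(ingress) == 4 * 3

# Every (input, output) pair is connected through at least one middle
# switch, which is the basis of the non-blocking property.
paths = {(a, c) for (a, b1) in ingress for (b2, c) in egress if b1 == b2}
assert len(paths) == 4 * 4
```

In practice, how many middle switches are needed for strict non-blocking operation depends on the port count per edge switch; the sketch above only demonstrates the full-mesh coupling between adjacent stages.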
The switches within the racks can include printed circuit boards (PCBs) having traces, or interconnects, that route ports on the front of the switch (e.g., a port that is accessible from the front side of the rack when the switch is within the rack) to a switch application-specific integrated circuit (ASIC). The switch ASIC can route data among the ports and, therefore, among other switches, servers, and other equipment within the data center implementing the network topology. Thus, to implement the network topology, the ports of the switches within the same or different racks can be coupled together with cables. The length of the traces on the PCB coupling the switch ASIC with the ports can limit the length of the corresponding cables coupled with the ports to implement the network topology. For example, a longer trace on the PCB permits only a shorter cable due to issues such as signal integrity (SI).
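The trace-length/cable-length tradeoff can be thought of as a shared insertion-loss budget: loss spent in the PCB trace leaves less budget for the cable. The sketch below illustrates this relationship with purely assumed numbers (the budget and per-unit loss figures are illustrative, not measured or specified values):

```python
# Hypothetical link-budget sketch: the channel between two switch ASICs
# has a fixed insertion-loss budget. Loss consumed by the PCB trace
# reduces the loss available to the cable, so a longer trace permits
# only a shorter cable. All numbers are illustrative assumptions.

def max_cable_len_m(trace_len_in: float,
                    budget_db: float = 16.0,       # assumed total channel budget (dB)
                    trace_db_per_in: float = 0.8,  # assumed trace loss (dB/inch)
                    cable_db_per_m: float = 2.0    # assumed cable loss (dB/meter)
                    ) -> float:
    """Return the maximum cable length, given loss already spent on trace."""
    remaining_db = budget_db - trace_len_in * trace_db_per_in
    return max(remaining_db, 0.0) / cable_db_per_m

# A longer PCB trace leaves a smaller budget and thus a shorter cable.
assert max_cable_len_m(5.0) > max_cable_len_m(10.0)
```

Real channels involve frequency-dependent loss, reflections, and crosstalk rather than a single scalar budget, but the monotonic tradeoff the sketch shows is the constraint described above.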