Data centers must move petabits of data per second, demand is constantly growing, and costs must be continuously contained for operations to remain profitable. Space and power are key monetization considerations within data centers, driving a need for improved efficiency in the delivery of data over status quo data networking equipment designs. One would expect data networking equipment manufacturers to take the following approaches: 1) seek multi-source agreement (MSA) consensus on reducing the size of existing pluggable transceiver form factors so as to improve the port density of data networking gear faceplates; 2) develop new optical interface cards, cables, and connectors to reduce the amount of fiber required within a data center; and 3) achieve greater data rates per optical fiber via new, higher-speed standards. To a large extent, however, these things have not occurred.
Data centers have already made significant investments in transceivers to achieve status quo data transport rates, and cost favors the continued use of existing transceivers over the adoption of, and investment in, a new set of transceivers that achieve the same rates. MSA consensus takes time and requires early adoption, which cost pressures may not permit. A new transceiver form factor does not necessarily mitigate equipment faceplate surface area as the bottleneck on the number of hosted transceivers and, therefore, on delivered traffic per second. Optical transceivers are pluggable in nature because of the number of possible optical interface standards, wavelengths, and transmit power levels deployed at the connected interface at the far end of the optical fiber. Developing a card that can interface with a long list of possible far-end standards is likely unaffordable for most manufacturers, while a card imposing a rigid set of far-end attributes would likely not offer sufficient deployment flexibility to gain industry adoption. Further, data rates are constrained by industry adoption and by limits imposed by physics.
Current solutions do not enable the strategic positioning of the aggregate housing of transceiver interfaces, such as at the top or bottom of a rack, so as to limit the extent of fiber deployment within the rack. Thus, current solutions do little to mitigate the space consumption and cost of the fiber itself. Further, current solutions do not make modular the relationship between a potentially “infinite” pool of aggregated transceiver interfaces and any number of data traffic motherboard/processor chassis based upon interconnection and bandwidth requirements.
Conventional data networking equipment is typically interconnected using optical fiber (a thin glass fiber through which light can be transmitted). At the termination point of the optical fiber is a polished tip, as well as a connector, that secures the alignment of the optical fiber to the transmit and receive components on the fiber-optic side of a pluggable transceiver. The device-facing side of these pluggable transceivers generally interconnects with the device motherboard/processor(s) using a form factor and electrical interface specified by an MSA among competing pluggable transceiver manufacturers. Pluggable transceivers generally plug individually into one of many possible MSA-compliant electrical interfaces on the front faceplate of the same physical chassis that hosts the motherboard/processor(s) that handle the ingress/egress traffic transported on the connected optical fibers.
What is still needed in the art, however, is a technology that moves the housing of the pluggable transceivers, and the connected fibers, away from the faceplate of the chassis that hosts the motherboard/processor(s). Such a technology would enable the strategic positioning of the aggregate housing of transceiver interfaces, such as at the top or bottom of a rack, limiting the extent of fiber deployment within the rack and mitigating the space consumption and cost of the fiber itself. It would also make modular the relationship between a potentially “infinite” pool of aggregated transceiver interfaces and any number of data traffic motherboard/processor chassis based upon interconnection and bandwidth requirements.