Computers are customarily provided with computer cage structures, which may comprise a sheet metal framework and which may contain a backplane. A backplane is a circuit board (e.g., a mother card) or framework that supports other circuit boards, devices, and the interconnections among devices, and provides power and data signals to the supported devices. The mother card may be the main circuit card in the computer, interconnecting additional logic cards and assemblies. The computer cage structure is adapted to receive and removably support at least one, and preferably a plurality of, option or daughter cards (blades or nodes) which, when operatively installed in their associated cage structure, upgrade the operating capabilities of the computer. For example, it is known to place an assembly, including a backplane and various circuit boards, such as a processor card, an input-output card, and a so-called memory riser card, within an open cage. This forms a so-called central electronics complex (CEC) or cage of a computer system. The cage is subsequently fixed within a computer housing.
A standard containing enclosure or cage protects the mother card and the individual daughter cards and facilitates the easy insertion and removal of the daughter cards from a mother card (mother board) or backplane slot. These daughter cards may be installed in the computer during original manufacture and/or subsequently installed by the computer purchaser. The cage serves to position and mechanically support the circuit boards within the computer housing, and acts as an electromagnetic compatibility (EMC) shield. An EMC shield allows operation in an electromagnetic environment at an optimal level of efficiency, and allows static charges to be drained to a frame ground. Moreover, the cage helps to protect the components contained therein from environmental damage, for example, vibrations, which could cause the components to fail.
Additionally, the cage is typically fixed within a so-called system chassis, which is a frame that provides further support for the cage, and which is removably stacked upon other system chassis within a system rack. The chassis may contain other components and sub-systems, such as power supplies and cooling fans, for example, which are connected to the components within the cage using cables, for instance.
A daughter card may include a relatively small rectangular printed circuit board having a connector along one side edge. A 20″×24″ node or server may weigh over a hundred pounds, for example. The mother card or system backplane slot has an electrical connector. The daughter card connector plugs into a corresponding electrical connector of the mother card to operatively couple the daughter card to the mother card or system backplane slot. In order to allow the circuit boards or daughter cards to be connected to the backplane, it is also typical to position the backplane at the middle of the cage, in a vertical orientation. This allows the circuit boards or daughter cards to be plugged into the card slots of the backplane through, for example, the open front of the cage.
Data processing systems in general, and server-class systems in particular, are frequently implemented with a server chassis or cabinet having a plurality of racks. Each cabinet rack can hold a rack-mounted device (e.g., a daughter card, also referred to herein as a node, blade, or server blade) on which one or more general purpose processors and/or memory devices are attached. The racks are vertically spaced within the cabinet according to an industry standard displacement (the “U”). Cabinets and racks are characterized in terms of this dimension such that, for example, a 42U cabinet is capable of receiving 42 1U rack-mounted devices, 21 2U devices, and so forth. Dense server designs are also becoming available, which allow a server chassis to be inserted into a cabinet rack, thus allowing densities greater than one server per 1U. To achieve these greater densities, the server chassis may provide shared components, such as power supplies, fans, or media access devices, which can be shared among all of the blades in the server blade chassis.
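The U-based capacity arithmetic above reduces to integer division. The following sketch is purely illustrative (the function name is hypothetical and not part of any standard):

```python
def devices_per_cabinet(cabinet_u: int, device_u: int) -> int:
    """Return how many devices of a given height (in rack units,
    'U') fit in a cabinet of the stated height."""
    return cabinet_u // device_u  # integer division: a partial slot is unusable

# A 42U cabinet receives 42 1U devices, 21 2U devices, and so forth.
print(devices_per_cabinet(42, 1))  # 42
print(devices_per_cabinet(42, 2))  # 21
```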
Problems have arisen, for example, with the advent of daughter cards such as large, massive processor-memory cards. Recent system architectures have migrated to using multiples of these large cards, parallel to each other, installed in a vertical orientation and perpendicular to the CEC motherboard. Inherent in such an architecture, however, are difficulties in cooling these cards and the CEC board.
For example, with the advent of multichip modules (MCMs), containing multiple integrated circuit (IC) chips each having many thousands of circuit elements, it has become possible to pack great numbers of electronic components together within a very small volume. As is well known, ICs generate significant amounts of heat during the course of their normal operation. Since most semiconductor or other solid state devices are sensitive to excessive temperatures, a solution to the problem of the generation of heat by IC chips in close proximity to one another in MCMs is of continuing concern to the industry.
Current state-of-the-art cooling requires either staggering the MCMs away from the midplane connector, if air cooled, or using water or refrigerant cooling when the MCMs are instead placed next to the midplane. Air cooling multiple high-powered MCMs in series along the midplane is ineffective due to air temperature rise and serial airflow impedances.
Since high-end server performance often requires placing numerous high-powered logic modules in close proximity to a common vertical midplane, prior art central electronic complexes (CECs) have been unable to be air cooled in such an arrangement, due to the inability of serial airflow through these logic modules or MCMs to remove the heat.
Secondly, and perhaps more critically for server performance, as logic voltage drops with each new chip generation, higher currents and I²R losses result. In particular, the printed circuit board midplane that delivers power from the power supplies to the logic modules and interconnects can generate 1000 to 2000 watts of heat due to high currents and I²R losses. With the low voltages of new CMOS, the currents are increasing dramatically. These currents are carried from the power supplies through the midplanes that interconnect the nodes. Even if prior art water or refrigeration cooling is used on the logic modules, such cooling is unable to cool more than 200 to 300 watts effectively, as the conductive thermal path from the power planes to the aluminum stiffener and the convective performance of the stiffener are both limited.
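The link between falling logic voltage and rising midplane heating follows directly from I = P/V and P_loss = I²R: at constant delivered power, halving the supply voltage quadruples the resistive loss. A minimal sketch with purely hypothetical numbers (the 10 kW load, 1.1 V supply, and 20 µΩ effective plane resistance are assumptions for illustration, not figures from this document):

```python
def midplane_loss_w(load_power_w: float, supply_voltage_v: float,
                    plane_resistance_ohm: float) -> float:
    """I^2*R heating in the midplane power planes for a given load."""
    current_a = load_power_w / supply_voltage_v   # I = P / V
    return current_a ** 2 * plane_resistance_ohm  # P_loss = I^2 * R

# Hypothetical: 10 kW delivered at 1.1 V through 20 micro-ohms.
low_v = midplane_loss_w(10_000, 1.1, 20e-6)   # ~1653 W, within the 1000-2000 W range
high_v = midplane_loss_w(10_000, 2.2, 20e-6)  # ~413 W at double the voltage
print(round(low_v), round(high_v))
```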
Prior art midplanes have been cooled by airflow flowing over the stiffener, parallel to the midplane. Midplane heat is removed via conduction through the insulative epoxy glass where the stiffener contacts the electrically isolated epoxy glass. Unfortunately, because of the insulative properties of the epoxy glass, this approach works for only about 200 watts, or at most 300 watts, under most reasonable airflows and temperature specifications.
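The roughly 200 to 300 watt ceiling is consistent with simple one-dimensional conduction through the epoxy glass, Q = k·A·ΔT/t. A back-of-the-envelope sketch follows; the through-plane conductivity of about 0.3 W/m·K is typical of epoxy-glass laminates such as FR-4, while the contact area, laminate thickness, and temperature drop are assumed here for illustration only:

```python
def conduction_w(k_w_per_mk: float, area_m2: float,
                 delta_t_k: float, thickness_m: float) -> float:
    """One-dimensional steady-state conduction: Q = k * A * dT / t."""
    return k_w_per_mk * area_m2 * delta_t_k / thickness_m

# Hypothetical: 0.1 m^2 of contact area, 10 K drop, 1 mm of epoxy glass.
print(round(conduction_w(0.3, 0.1, 10.0, 1e-3)))  # ~300 W, near the stated ceiling
```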
For the foregoing reasons, therefore, there is a need for a cooling approach that enables significantly higher heat loads of logic, I/O, memory, and power supplies. Further, there is a need to more efficiently cool the components of the CEC with a symmetrical, balanced airflow through the various nodes, to support higher generated power and to enable low temperature specifications on components placed downstream in the heated exhaust air.