1. Field of the Invention
The present invention relates generally to systems and methods for managing fuel cell devices, and more particularly to managing electrically isolated fuel cell powered devices within an equipment rack.
2. Discussion of Background Art
Modern service- and utility-based computing is increasingly driving enterprises toward consolidating large numbers of electrical servers, such as blade servers, and their supporting devices into massive data centers. A data center is generally defined as a room, or in some cases an entire building or buildings, that houses numerous printed circuit (PC) board electronic systems arranged in a number of racks. Such centers, of perhaps fifty thousand nodes or more, require that such servers be efficiently networked, powered, and cooled.
Typically such equipment is physically located within a large number of racks, with multiple racks arranged into a row. The standard rack may be defined according to the enclosure dimensions set by the Electronics Industry Association (EIA): 78 in. (2 meters) high, 24 in. (0.61 meter) wide, and 30 in. (0.76 meter) deep.
Standard racks can be configured to house a number of PC boards, currently about forty (40), with future rack configurations being designed to accommodate up to eighty (80) boards. Within these racks are also network cables and power cables. FIGS. 1A through 1D each show an example of what such equipment racks can look like. FIG. 1A is a pictorial diagram of electrical cabling within a first equipment rack. FIG. 1B is a pictorial diagram of electrical cabling within a second equipment rack. FIG. 1C is a pictorial diagram of electrical cabling within a third equipment rack. And FIG. 1D is a pictorial diagram of electrical cabling within a fourth equipment rack.
The PC boards typically include a number of components, e.g., processors, micro-controllers, high-speed video cards, memories, and semiconductor devices, that dissipate relatively significant amounts of heat during operation. For example, a typical PC board with multiple microprocessors may dissipate as much as 250 W of power. Consequently, a rack containing 40 PC boards of this type may dissipate approximately 10 kW of power.
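The rack-level figure above follows from simple arithmetic; a minimal back-of-the-envelope sketch, using only the illustrative values stated in the text (the variable names are for illustration and do not come from the source):

```python
# Illustrative rack power estimate from the figures cited above
# (example values, not measured data).
BOARD_POWER_W = 250    # typical multi-microprocessor PC board dissipation
BOARDS_PER_RACK = 40   # boards housed in a standard rack

rack_power_w = BOARD_POWER_W * BOARDS_PER_RACK
rack_power_kw = rack_power_w / 1000
print(rack_power_kw)  # -> 10.0 (kW per rack)
```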
Generally, the power used to remove the heat generated by the components on each PC board is equal to about 10 percent of the power used for their operation. However, the power required to remove the heat dissipated by the same components configured into multiple racks in a data center is generally greater, and can be equal to about 50 percent of the power used for their operation. The difference in the power required to dissipate the various heat loads between racks and data centers can be attributed to the additional thermodynamic work needed in the data center to cool the air. For example, racks typically use fans to move cooling air across the heat-dissipating components. Data centers, in turn, often implement reverse power cycles to cool the heated return air from the racks. This additional work associated with moving the cooling air through the data center and cooling equipment consumes large amounts of energy and makes cooling large data centers difficult.
In practice, conventional data centers are cooled using one or more Computer Room Air Conditioning (CRAC) units. The typical compressor unit in a CRAC consumes input power equal to a minimum of about thirty (30) percent of the cooling capacity required to sufficiently cool the data center. The other components, e.g., condensers, air movers (fans), etc., typically require an additional twenty (20) percent of the required cooling capacity.
As an example, a high-density data center with 100 racks, each rack having a maximum power dissipation of 10 kW, generally requires 1 MW of cooling capacity. Consequently, air conditioning units having the capacity to remove 1 MW of heat generally require a minimum of 300 kW of input compressor power, plus additional power to drive the air-moving devices (e.g., fans and blowers).
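The data-center-level totals above can likewise be sketched from the stated percentages; again, the values and names below are illustrative only, taken from the example in the text:

```python
# Illustrative data-center cooling estimate from the example above.
RACKS = 100
RACK_POWER_KW = 10.0         # maximum dissipation per rack
COMPRESSOR_FRACTION = 0.30   # minimum compressor input power fraction
AIR_MOVER_FRACTION = 0.20    # condensers, fans, and other components

cooling_capacity_kw = RACKS * RACK_POWER_KW                 # 1 MW total
compressor_kw = COMPRESSOR_FRACTION * cooling_capacity_kw   # about 300 kW
air_mover_kw = AIR_MOVER_FRACTION * cooling_capacity_kw     # about 200 kW
```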
As is quite clear from these Figures, technicians who install and service these cable-intensive racks are presented with a substantial amount of work each time such electrical servers are installed, removed, or serviced. With such wiring complexity, not only do such tasks require a significant amount of time to wade through all of the wires and cables, but there is also a substantial chance that errors will be made during reinstallation, especially if more than one server unit is serviced at a time. Such excessive cabling also impedes equipment inspection and substantially impedes the flow of cooling air within the equipment rack, leading to device hot-spots and thus premature equipment failure.
Another problem with conventional systems is that each equipment rack's power needs can vary substantially, depending upon: how many servers or other devices are located in the rack; whether such devices are in a standby mode or are being fully utilized; and variations in rack cabling losses. While central high-voltage/current power sources located elsewhere in the data center can provide the necessary power, the aforementioned power consumption variations often result in greater overall data center transmission line losses, and more power-line transients and spikes, especially as various rack equipment goes on-line and off-line. Due to such concerns, power-line conditioning and switching equipment is typically added to each rack, resulting in even greater losses and heat generation.
Reliance on central power systems also subjects the racks to data-center-wide power failure conditions, which can result in disruptions in service and loss of data. While some equipment racks may have a battery backup, such batteries are designed to preserve data and permit graceful server shutdown upon a power loss; they are not designed or sized to permit equipment within the rack to continue operating at full power.
Each equipment rack's cooling needs can also vary substantially depending upon how many servers or other devices are located in the rack, and whether such devices are in a standby mode or being fully utilized. Central air conditioning units located elsewhere in the data center provide the necessary cooling air; however, due to the physical processes of ducting the cooling air throughout the data center, a significant amount of energy is wasted just transmitting the cooling air from the central location to the equipment in the racks. Cabling and wires internal to the rack and under the data center floors block much of the cooling air, resulting in various hot-spots that can lead to premature equipment failure.
One way of reducing the energy wasted by ducting cooling air from a central source to equipment within the racks is to directly cool various rack components using liquid cooling. Such systems include surrounding equipment with liquid-cooled “cold-plates.” Such cold-plates may alternatively be mounted inside the equipment proximate to specific heat-generating components. However, while such liquid cooling systems provide greater control and targeting of coolant to where it is needed most, they also create a safety and reliability problem when interspersed with a rack's electrical cabling. Accidental spills, condensation, and/or leaky connections can easily damage or short out various electrical equipment within the rack, resulting not only in degradation of the data center's level of service, but also in a potentially very expensive repair bill.
In response to the concerns discussed above, what is needed is a system and method for managing fuel cell devices that overcomes the problems of the prior art.