1. Field of the Invention
The present invention relates generally to systems and methods for powering equipment racks, and more particularly to providing electrical power to an equipment rack using a fuel cell.
2. Discussion of Background Art
Modern service- and utility-based computing is increasingly driving enterprises to consolidate large numbers of electronic servers, such as blade servers, and their supporting devices into massive data centers. A data center is generally defined as a room, or in some cases an entire building or buildings, that houses numerous printed circuit (PC) board electronic systems arranged in a number of racks. Such centers, of perhaps fifty thousand nodes or more, require that the servers be efficiently networked, powered, and cooled.
Typically, such equipment is physically located within a large number of racks, and multiple racks are arranged into rows. The standard rack may be defined according to dimensions set by the Electronics Industry Association (EIA) for an enclosure: 78 in. (2 meters) high, 24 in. (0.61 meter) wide, and 30 in. (0.76 meter) deep.
Standard racks can be configured to house about forty (40) PC boards, with future rack configurations being designed to accommodate up to eighty (80) boards. Within these racks are also network cables and power cables, such as shown in FIG. 1. The PC boards typically include a number of components, e.g., processors, micro-controllers, high-speed video cards, memories, and semiconductor devices, that dissipate relatively significant amounts of heat during operation. For example, a typical PC board with multiple microprocessors may dissipate as much as 250 W of power. Consequently, a rack containing 40 PC boards of this type may dissipate approximately 10 kW of power.
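The rack-level power figure above follows directly from the per-board figure. A minimal illustrative sketch, using the example values cited in the text (the variable names are chosen here for illustration only):

```python
# Illustrative only: board count and per-board dissipation are the example
# values from the text above, not limits of the invention.
BOARDS_PER_RACK = 40       # typical rack configuration cited above
WATTS_PER_BOARD = 250.0    # dissipation of a multi-microprocessor PC board (W)

rack_dissipation_kw = BOARDS_PER_RACK * WATTS_PER_BOARD / 1000.0
print(rack_dissipation_kw)  # -> 10.0 (kW per rack)
```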
Generally, the power used to remove heat generated by the components on each PC board is equal to about 10 percent of the power used for their operation. However, the power required to remove the heat dissipated by the same components configured into multiple racks in a data center is generally greater and can be equal to about 50 percent of the power used for their operation. The difference in required power for dissipating the various heat loads between racks and data centers can be attributed to the additional thermodynamic work needed in the data center to cool the air. For example, racks typically use fans to move cooling air across the heat-dissipating components. Data centers in turn often implement reverse power cycles to cool heated return air from the racks. This additional work associated with moving the cooling air through the data center and cooling equipment consumes large amounts of energy and makes cooling large data centers difficult.
In practice, conventional data centers are cooled using one or more Computer Room Air Conditioning (CRAC) units. The compressor in a typical CRAC unit consumes a minimum of about thirty (30) percent of the power required to sufficiently cool the data center. The other components, e.g., condensers, air movers (fans), etc., typically require an additional twenty (20) percent of the required cooling capacity.
As an example, a high-density data center with 100 racks, each rack having a maximum power dissipation of 10 kW, generally requires 1 MW of cooling capacity. Consequently, air conditioning units having the capacity to remove 1 MW of heat generally require a minimum of 300 kW of compressor input power, plus additional power to drive the air-moving devices (e.g., fans and blowers).
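The data-center-scale figures above can be checked with a short sketch combining the example rack count with the approximate CRAC percentages cited earlier (30 percent compressor, 20 percent other components); all names and values here are illustrative, not part of the claimed system:

```python
# Illustrative arithmetic for the worked example above.
RACKS = 100
RACK_DISSIPATION_KW = 10.0      # maximum per-rack dissipation cited above
COMPRESSOR_FRACTION = 0.30      # approx. minimum compressor input fraction
OTHER_FRACTION = 0.20           # condensers, fans, blowers, etc.

cooling_capacity_kw = RACKS * RACK_DISSIPATION_KW            # 1000 kW = 1 MW
compressor_kw = COMPRESSOR_FRACTION * cooling_capacity_kw    # 300 kW minimum
other_kw = OTHER_FRACTION * cooling_capacity_kw              # 200 kW additional
print(cooling_capacity_kw, compressor_kw, other_kw)
```

Together the compressor and other components account for roughly half of the cooling capacity, consistent with the 50 percent figure given for data-center cooling overhead.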
One problem with current rack power systems is a complete reliance on a central power grid. Such reliance subjects equipment racks to data-center-wide power failure conditions, which can result in disruptions in service and loss of data. While some equipment racks may have a battery backup, such batteries are only designed to preserve data and permit graceful server shutdown upon a power loss. However, such batteries are not designed or sized to permit equipment within the rack to continue operating at full power.
Another problem with conventional systems is that each equipment rack's power needs can vary substantially, depending upon: how many servers or other devices are located in the rack; whether such devices are in a standby mode or are being fully utilized; and the variations in rack cabling losses. While central high-voltage/current power sources located elsewhere in the data center can provide the necessary power, the aforementioned power consumption variations often result in greater overall data center transmission line losses, and more power-line transients and spikes, especially as various rack equipment goes on-line and off-line. Due to such concerns, power-line conditioning and master switching equipment is typically added to each rack, resulting in even greater losses and heat generation.
Each equipment rack's cooling needs can also vary substantially depending upon how many servers or other devices are located in the rack, and whether such devices are in a standby mode or being fully utilized. Central air conditioning units located elsewhere in the data center provide the necessary cooling air; however, due to the physical processes of ducting the cooling air throughout the data center, a significant amount of energy is wasted just transmitting the cooling air from the central location to the equipment in the racks. Cabling and wires internal to the rack and under the data center floors block much of the cooling air, resulting in various hot-spots that can lead to premature equipment failure.
In response to the concerns discussed above, what is needed is a system and method for powering equipment racks that overcomes the problems of the prior art.