Densification in data centers has become so extreme that the power density of the systems housed there is growing at a rate unmatched by technology developments in data center heating, ventilation, and air-conditioning (HVAC) design. Current servers and disk storage systems generate 10,000 to 20,000 watts per square meter of footprint, and telecommunication equipment may generate two to three times the heat of the servers and disk storage systems. Liquid-cooled computers could solve this heat transfer problem; however, both end users and computer manufacturers are reluctant to make the transition from air-cooled to liquid-cooled computers. In addition, there is currently no easy way to transition from an air-cooled data center to a liquid-cooled one without a major overhaul of the data center and substantial downtime for the retrofit.
Computer designers continue to invent methods that extend the air-cooling limits of individual racks of computers (or other electronic heat-generating devices). However, these high-heat-density racks require extraordinary amounts of air to remove the heat they dissipate, in turn requiring large and expensive air-handling equipment.
Many modern data centers employ a raised floor configured as a supply-air plenum. Large HVAC units take air from near the ceiling of the data center, chill it, and blow the cold air into the plenum under the raised floor. Vents in the floor near the servers allow cold air to be drawn up from the plenum and through the racks; the now-warm air is blown out the back of each rack, rises to the ceiling, and is eventually drawn back into the HVAC units to begin the cycle anew. However, this type of system can handle only about 1600 to 2100 watts per square meter, significantly less than the heat generated by many current electronic systems. The data center must therefore contain significant amounts of empty space in order to be capable of cooling the equipment. Use of the under-floor plenum is further complicated because airflow is often impeded by cabling and other obstructions residing in the plenum. In addition, perforated tiles limit airflow from the plenum into the data center to approximately 6 cubic meters per minute, well below the 60 cubic meters per minute required by some server racks. Even the use of blowers to actively pull cold air from the plenum and direct it to the front of the rack is insufficient to cool many modern servers. Balancing the airflow throughout the data center is difficult and often requires a substantial amount of trial-and-error experimentation. Finally, the airflow is somewhat inefficient in that hot and cold air mix substantially in the spaces above the servers and in the aisles, resulting in a loss of efficiency and capacity.
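A back-of-the-envelope calculation illustrates the scale of the airflow shortfall just described. The figures (approximately 6 cubic meters per minute through one perforated tile versus roughly 60 cubic meters per minute required by a dense server rack) come from the text above; the script itself is only an illustrative sketch, not part of any described system.

```python
import math

# Figures from the text: a perforated tile passes ~6 m^3/min of air,
# while a dense server rack may require ~60 m^3/min.
TILE_AIRFLOW = 6.0    # m^3/min through one perforated tile
RACK_AIRFLOW = 60.0   # m^3/min required by a dense server rack

def tiles_needed(rack_airflow: float, tile_airflow: float) -> int:
    """Number of perforated tiles required to satisfy one rack's airflow."""
    return math.ceil(rack_airflow / tile_airflow)

if __name__ == "__main__":
    n = tiles_needed(RACK_AIRFLOW, TILE_AIRFLOW)
    print(f"One dense rack needs the airflow of about {n} perforated tiles.")
```

At these figures a single rack would monopolize the output of about ten tiles, which is consistent with the observation that substantial empty floor space is needed around dense equipment.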
In an attempt to increase the efficiency of raised-floor plenum designs, some designers incorporate a large number of sensors throughout the data center, statically or dynamically provisioning cooling based on environmental parameters using active dampers and other environmental controls. Others use a high-pressure cooling system in an attempt to increase the cooling capacity of the raised-floor plenum design. However, this technique still has all of the inefficiencies of any raised-floor plenum design, and it only increases the power-handling capacity of the data center to about 3200 watts per square meter, still below the requirements of densely packed servers or telecommunication devices.
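The gap between this capacity and modern equipment density can be quantified with a simple sketch. Using the figures from the text (a raised-floor cooling capacity of about 3200 watts per square meter versus rack densities of 10,000 to 20,000 watts per square meter), the calculation below estimates how much floor area must be dedicated to each square meter of dense equipment; it is an illustrative estimate only.

```python
# Illustrative sketch: floor area a raised-floor design must dedicate to
# each square meter of dense equipment, using figures from the text
# (cooling capacity ~3200 W/m^2; rack density 10,000-20,000 W/m^2).

COOLING_CAPACITY = 3200.0  # watts per m^2 of data center floor

def floor_area_per_rack_m2(rack_density: float, footprint: float = 1.0) -> float:
    """Floor area (m^2) needed to cool `footprint` m^2 of rack
    dissipating `rack_density` watts per m^2 of footprint."""
    return rack_density * footprint / COOLING_CAPACITY

if __name__ == "__main__":
    for density in (10_000.0, 20_000.0):
        area = floor_area_per_rack_m2(density)
        print(f"A rack at {density:.0f} W/m^2 needs ~{area:.2f} m^2 "
              f"of floor per m^2 of footprint.")
```

At these figures each square meter of dense rack requires roughly three to six square meters of cooled floor, which is why such data centers are largely empty space.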
In a desperate attempt to increase the cooling capabilities of a data center, some designers dedicate an entire second floor to housing their computer room air conditioners (CRACs). While this allows the use of large numbers of CRACs without consuming expensive data center floor space, the second floor effectively acts as a large under-floor plenum and is subject to the same inefficiencies and limitations as the under-floor plenum design.
Other designers include air coolers within the server racks. For example, a liquid-to-air heat exchanger may be mounted on the back of a server rack to cool the air exiting the rack back to room temperature. However, the airflow of the heat exchanger fans must precisely match the airflow of the server to avoid reliability and operational issues within the server. Also, mounting the heat exchanger on the rack reduces the serviceability of the rack, and the fluid lines attached to the rack must be disconnected before the rack may be moved. This results in less flexibility due to the presence of the liquid lines and may require plumbing changes at the rack's new location. Moreover, this technique does not directly cool the heat-generating integrated circuits; it merely exchanges heat with the exhaust air, which is not as efficient as direct liquid cooling of the integrated circuits.
Another possibility is the use of overhead cooling, which may offer cooling densities on the order of 8600 watts per square meter. However, such overhead devices require a high ceiling that must also be strong enough to support the coolers. Moreover, in such a design there is no easy migration path from air-cooled to liquid-cooled servers, and some users are concerned that leaks from the overhead coolers could drip onto, and possibly damage, their servers.