Conventional data centers occupy large rooms that require tightly controlled environmental conditions. The data storage and processing equipment generally takes the form of servers, storage arrays, and network equipment such as routers and switches, along with other heat-producing computer equipment, all mounted in standard rack cabinets arranged in rows in the room. Rack-mounted computer equipment must be cooled to remove the excess heat generated by its processors and other components, so complex air conditioning systems are required to maintain the desired temperature and humidity in the room. These computer room air conditioning units impose large power demands on top of the computer electrical loads, to the extent that in some cases it is the capacity of the local electricity grid that places a limit on the maximum size of a data center, thereby limiting its ultimate growth potential.
The internet has created an ever-increasing demand for data storage and processing capacity. In recent years in particular, massive growth in Internet services such as streaming of high-quality video content has produced a corresponding massive growth in the capacity and performance demands placed on the data centers serving this content. The volume of corporate data that must be securely stored in data warehouses also continues to grow rapidly, and corporate and government computer systems have likewise grown dramatically in scale.
Further, the ongoing trend toward increasingly powerful CPUs, along with multiple CPUs per server, combined with smaller form factor servers (i.e., 1U rack-mounted servers and blade servers), has multiplied the amount of heat produced in rack-mounted computer equipment (and the need to cool it) by tenfold or more over the last few years.
It is now common to have a 1U server (1U = one rack unit, 1.75″ high) that draws 500-1000 W of power and emits 1500-3000 BTU/hr of heat. Removing this heat requires extremely powerful high-speed fans that are limited in size by the height of the 1.75″ server case. These fans require substantial power to generate the high-velocity airflow needed to cool the internal heat-producing components of the server. This fan energy becomes a significant part (5-10%) of the energy used by the server and increases the overall energy usage of each server and therefore of the data center.
In addition, when a computer rack cabinet is filled with 40 of these 1U servers, the rack can require 20-40 kW of power and 60,000-120,000 BTU/hr (5-10 tons) of cooling. (The same applies to blade servers, which are designed to house as many or even more CPUs per rack.) This heat density is far beyond the cooling capabilities of most raised-floor data centers.
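The rack-level figures above follow from standard unit conversions (1 W ≈ 3.412 BTU/hr; 1 ton of refrigeration = 12,000 BTU/hr); the ranges quoted in the text are rounded. A minimal sketch checking the arithmetic:

```python
# Unit-conversion check for the rack figures above (illustrative only).
BTU_PER_HR_PER_WATT = 3.412   # 1 W = 3.412 BTU/hr
BTU_PER_HR_PER_TON = 12_000   # 1 ton of refrigeration = 12,000 BTU/hr

servers_per_rack = 40
watts_per_server = (500, 1000)  # low/high estimate per 1U server

for w in watts_per_server:
    rack_watts = servers_per_rack * w
    rack_btu_hr = rack_watts * BTU_PER_HR_PER_WATT
    rack_tons = rack_btu_hr / BTU_PER_HR_PER_TON
    print(f"{rack_watts / 1000:.0f} kW -> {rack_btu_hr:,.0f} BTU/hr -> {rack_tons:.1f} tons")
```

At the high end (40 kW per rack) the exact conversion gives slightly more than the 120,000 BTU/hr quoted, consistent with the text's rounding.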
This limits the number of high-density racks a data center can hold, since such racks raise the average power per square foot in the room beyond the capacity of the cooling systems. It also forces higher and higher airflow rates in the data center cooling units in an attempt to deliver sufficient cold air to each rack through the raised floor. More fan energy is therefore needed to supply enough airflow to cool these higher-density heat loads (in addition to the higher cooling loads themselves), so more overall energy is consumed both by the more powerful internal fans of the servers and by the cooling units straining to deliver enough cold air to the high-density servers.
This invention effectively overcomes the inherent limitations of airflow-based cooling of high-density servers. It also improves the overall energy efficiency of the servers and the data center cooling systems by lowering the total amount of fan energy required. In addition, it reduces the amount of data center floor space occupied by cooling systems, since the majority of the heat is removed via the fluid piping assembly contained within the rack cabinets.
Providers of data center technology have responded to this demand by increasing processor and data storage density. However, despite improvements in processor efficiency, each increase in processing power increases the heat generated by the servers' processors, and it becomes difficult to cool the processors effectively using conventional approaches because of the load placed on the air conditioning systems and the resulting costs.
Limitations in the ability to cool processors place serious thermal limits on the capacity of data centers; if exceeded, they cause overheated servers, potentially leading to malfunctions, reduced mean time between failures (MTBF), and unexpected thermal shutdowns.
Current industry practice for heat removal from most modern computer equipment is based on internal fans forcing ambient air through the computer system cabinet. The major heat-generating internal components (e.g., the main CPU chips and power supplies) have attached heat sinks that transfer their heat to the air as fans force it through the computer chassis. This requires that the air at the front of the computer be relatively cool (65-75° F.). Warmed air is exhausted out of the back of the computer (approximately 20-30° F. warmer, at 85-105° F.) and is then drawn back into a cooling system. In addition, multiple computers are usually mounted together in a cabinet (rack) to save floor space, so heat buildup within the rack is high.
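The airflow a chassis needs to carry away its heat at a given exhaust temperature rise can be estimated with the standard sea-level HVAC rule of thumb (BTU/hr ≈ 1.08 × CFM × ΔT°F); the specific server wattage and temperature rise below are illustrative, taken from the ranges in the text:

```python
# Rough airflow needed to carry away a server's heat, using the
# standard sea-level HVAC approximation: BTU/hr = 1.08 * CFM * deltaT(F).
def required_cfm(watts: float, delta_t_f: float) -> float:
    btu_hr = watts * 3.412          # convert electrical load to heat output
    return btu_hr / (1.08 * delta_t_f)

# A 1 kW 1U server with the ~25 F exhaust temperature rise described above:
print(round(required_cfm(1000, 25)))  # roughly 126 CFM through one 1U chassis
```

This is why small-diameter 1U fans must spin at such high speeds: moving on the order of a hundred cubic feet of air per minute through a 1.75″-high chassis demands high velocity, and fan power rises steeply with velocity.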
Racks can now hold enough servers and CPUs that the power per rack easily reaches 30 kW, producing over 100,000 BTU/hr of heat. Using air as the means of heat removal is becoming extremely problematic for effectively cooling the servers, and very energy inefficient. Moreover, multiple racks are lined up in rows arranged in so-called "Hot Aisles and Cold Aisles" to improve the efficiency of cooling the room. Over the past few years the average power per computer server has risen and the number of servers that fit into each rack has increased dramatically. The average heat load in a computer data center has risen from 35-50 W/sf to 350-500 W/sf and is still rising. This is a known and growing problem in modern computer data centers, commonly referred to as a High-Density configuration, where the heat load can be as high as 50 kW per rack and rising.
There are various types of computer cooling systems, e.g., chilled water, glycol/condenser water, and direct expansion. All of them must ultimately use the same heat transfer path: cooling and circulating air in the enclosed computer room (or enclosed rack) so that 65-75° F. air is available at the intake of the computer.
This airflow-based heat transfer process is relatively inefficient for multiple reasons. Air is an inefficient conductor of heat compared to liquids or solid metal. Moreover, the warm air must travel a large distance from the heat source (the internal components of the computer), out of the back of the computer rack, and into the ambient air of the computer room before it can be circulated back to the cooling coils of the cooling unit.
There have been recent improvements that reduce the distance from the computer to the cooling coil, generally called "close coupled cooling," wherein the cooling coil is placed in or above the row of cabinets so that the distance is much shorter and the process is more efficient. There are also cooling systems in which the cooling coil is part of a fully enclosed rack cabinet, further improving the heat transfer process.
All of these improvements are based on the need to support the current practice of using forced airflow through the computer equipment as a required part of the heat transfer process, since this is how computer equipment is made today.
The current practice for rack-mounted computer equipment is based on a common standard shared by all computer and rack enclosure manufacturers, so virtually all rack-mounted computer equipment fits into a standard rack and the airflow is front to back.
While it is well known that air cooling is a very inefficient method of heat removal from computer systems, current industry practice is still based on it because of the simplicity of installing a server into any location without any direct connection to the cooling system.
Some manufacturers of cooling equipment have tried to improve the efficiency of air cooling by relocating the cooling units into close proximity to the computers. This is generally referred to as "close coupled cooling" and involves locating the cooling coils near or in the computer rack. This is an improvement, since the air does not have to travel as far as in traditional room cooling systems, but it still requires that air be used as the heat transfer medium, which is neither as effective nor as efficient as a liquid or solid metal conductor for transferring heat.
It is known that the thermal conductivity of water is much greater than that of air. Recently, providers of data center equipment have tried to use direct liquid cooling as an alternative to the traditional air-cooling. Chilled water or any other cooling liquid is piped directly into the interior of the computer chassis to the heat producing components such as the CPU. This requires specialized hardware and plumbing for each server and in the cabinets and/or racks in which the servers are mounted to remove heat more efficiently.
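The conductivity gap motivating liquid cooling can be quantified with approximate room-temperature handbook values (the specific numbers below are illustrative, not from the text):

```python
# Approximate room-temperature thermal conductivities in W/(m*K),
# standard handbook values (illustrative).
K_WATER = 0.60
K_AIR = 0.026
K_COPPER = 400.0   # for comparison with the "solid metal conductor" mentioned above

print(f"water vs. air:  ~{K_WATER / K_AIR:.0f}x more conductive")
print(f"copper vs. air: ~{K_COPPER / K_AIR:.0f}x more conductive")
```

Water's roughly twenty-fold conductivity advantage over air (before even counting its far higher volumetric heat capacity) is what makes piping a liquid to the heat source so much more effective than blowing air past it.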
However, direct fluid-based cooling (into the interior of computer equipment) is not very practical, because it introduces fluids, hoses, and piping inside the computer equipment that could leak. This invention overcomes this problem by keeping all fluids external to the computer equipment while avoiding the limitations of using air as the heat transfer medium, thus allowing continued use of existing industry standards for rack mounting computer equipment into cabinets without introducing any fluids into the computer equipment.