Electronic equipment is often housed within an enclosure, such as an equipment rack that holds computer servers and similar devices in assemblies mounted within the rack. The electronic equipment generates substantial heat that must be dissipated. Cool air typically passes through the housings to help dissipate this heat. In many cases, fans located in the front door and/or back door and/or within the rack and/or in the top of the rack are used to circulate the cool air and expel the warmed air.
One solution proposes a front or back rack panel that is several inches thick, and carries ducting and fans to route air through the rack. Cool air enters the bottom of the front, and exits the top of the back. However, such thickened panels increase the depth of the racks, which inherently limits the number of racks that can be fit into a data center.
As with individual equipment racks, there are heat dissipation and energy consumption issues associated with data centers. Resource demands and constraints, including those related to power, represent a critical concern in the United States today. The increasing demand, and strain, placed upon electrical grids across the United States by data centers of all sizes is a material contributor to this issue.
The United States Environmental Protection Agency (EPA) addressed this issue in August 2007 and submitted a report to the United States Congress as part of public law to help define a vision for achieving energy efficiencies in data centers. The EPA predicts that by 2011, 2% of the United States' entire energy supply will be consumed by data centers.
Currently, data center managers are focused on the delivery of service and dependability. There has been little incentive, however, for data center managers to optimize the energy efficiency of their data centers. In addition, the industry has not set any proper benchmarks for attainable energy efficiency targets, which further complicates the situation. Data center managers are primarily concerned about capital costs related to their data center's capacity and reliability. In most cases the energy costs are either hidden among other operating costs or simply absorbed as a cost of doing business. A study by the company IDC Global shows that for every $1.00 US of new server spend in 2005, $0.48 US was spent on power and cooling. This is a sharp increase from the year 2000, when the ratio was $0.21 US per $1.00 US of server spend. This ratio is anticipated to increase even further. It is expected, then, that the immediate demand to create more efficient data centers will be at the forefront of most companies' cost-saving initiatives.
Prior art legacy data centers typically have the following characteristics:
(1) An open air system that delivers cold air at approximately 55 degrees Fahrenheit (approximately 13 degrees Celsius) via overhead ducting, flooded room supply air, or a raised floor plenum;
(2) Perforated tiles (in a raised floor environment) that are used to channel the cold air from beneath the raised floor plenum into the data center;
(3) Computer racks, server enclosures and free-standing equipment oriented 180 degrees from alternate rows to create hot and cold aisles, which is an accepted best practice. Historically, however, information technology (IT) architecture has been the driving force in deciding the location of the racks and other equipment, leading to a disorganized and inefficient approach to air distribution;
(4) A minimum separation of 4 feet (approximately 1.22 meters) between cold aisles and 3 feet (approximately 0.91 meters) between hot aisles, based on recommendations from the American National Standards Institute (ANSI/TIA/EIA-942 April 2005), National Fire Protection Association (NFPA), National Electric Code (NEC), and local Authority Having Jurisdiction (AHJ);
(5) Dedicated precision air conditioning units located at the nearest perimeter wall and generally in close proximity to IT racks. However, optimal placement of the computer room air conditioner (CRAC) for free air movement is constrained by structural columns, and often requires service clearances or other infrastructure accommodations;
(6) Traditional air conditioning systems that are “turned on” on day one and remain at full cooling capability, even if only a small percentage of the design load is required; and
(7) Existing air conditioning systems that have limitations and are sensitive to the location of heat loads in and around the data center, and therefore are not resilient to changing configurations and requirements.
In practice, the airflow in the legacy data center is very unpredictable and has numerous inefficiencies, which proliferate as power densities increase. Problems encountered in a data center include: bypass airflow, recirculation, hot and cold air remixing, air stratification, air stagnation, and uncomfortable data center ambient room temperature.
Bypass Airflow
Bypass airflow is defined as conditioned air that does not reach computer equipment. The most common form of bypass airflow occurs when air supplied from the precision air conditioning units is returned directly back to the air conditioner's intake. Examples of this form of bypass airflow may include leakage areas such as air penetrating through cable cut-outs, holes under cabinets, or misplaced perforated tiles that blow air directly back to the air conditioner's intake. Other examples of bypass airflow include air that escapes through holes in the computer room perimeter walls and non-sealed doors.
A recent study completed by engineers from UpSite Technologies, Inc.™ and Uptime Institute, Inc.® concluded that in conventional legacy data centers only 40% of the air delivered from precision air conditioning units makes its way to cool the existing information technology (IT) equipment. This amounts to a tremendous waste in energy, as well as an excessive and unnecessary operational expense.
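The scale of this loss can be illustrated with a simple airflow accounting sketch. The 40% delivery fraction is taken from the study cited above; the total supply airflow figure and the function name below are hypothetical, chosen only for illustration.

```python
def cooling_delivery(supply_cfm: float, delivered_fraction: float) -> dict:
    """Split total conditioned airflow into air that reaches IT equipment
    intakes and bypass airflow that returns unused to the air conditioners."""
    delivered = supply_cfm * delivered_fraction
    bypass = supply_cfm - delivered
    return {"delivered_cfm": delivered, "bypass_cfm": bypass}

# Hypothetical facility supplying 100,000 CFM of conditioned air,
# with only 40% reaching IT equipment as reported in the study above.
result = cooling_delivery(100_000, 0.40)
print(result)  # {'delivered_cfm': 40000.0, 'bypass_cfm': 60000.0}
```

Under these assumed numbers, 60,000 CFM of conditioned air is produced and moved at full energy cost without ever cooling the IT equipment.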
Recirculation
Recirculation occurs when the hot air exhausted from a computing device, typically mounted in a rack or cabinet, is fed back into its own intake or the intake of a different computing device. Recirculation principally occurs in servers located at the highest points of a high-density rack enclosure. Recirculation can overheat and damage computing equipment, which may cause disruption to mission-critical services in the data center.
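The effect of recirculation on intake temperature can be sketched with a simple mixing model. The linear-mixing assumption and all temperatures and fractions below are illustrative only, not measurements from any cited study.

```python
def intake_temperature(t_supply_f: float, t_exhaust_f: float,
                       recirculation_fraction: float) -> float:
    """Approximate server intake temperature (degrees Fahrenheit) as a
    linear mix of cold supply air and recirculated hot exhaust air.
    recirculation_fraction is the share of intake air that is
    recirculated exhaust (0.0 = none, 1.0 = all)."""
    return ((1.0 - recirculation_fraction) * t_supply_f
            + recirculation_fraction * t_exhaust_f)

# Hypothetical values: 55 F supply air, 100 F exhaust air. With 50% of
# the intake drawn from recirculated exhaust, the mixed intake reaches
# 77.5 F, exceeding the 77 F upper limit recommended by ASHRAE TC 9.9.
print(intake_temperature(55.0, 100.0, 0.50))  # 77.5
```

The sketch shows why servers at the top of a high-density rack, where the recirculated fraction is largest, are the first to exceed recommended intake temperatures.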
Hot and Cold Air Remixing and Air Stratification
Air stratification in a data center is defined as the layering effect of temperature gradients from the bottom to the top of the rack or cabinet enclosure.
In general, in a raised floor environment, air is delivered at approximately 55 degrees Fahrenheit (approximately 13 degrees Celsius) from under the raised floor through perforated tiles. The temperature of the air as it penetrates the perforated tile remains the same as the supply temperature. As the air moves vertically up the rack, however, the air temperatures gradually increase. In high-density rack enclosures it is not uncommon for temperatures to exceed 90 degrees Fahrenheit (approximately 32 degrees Celsius) at the server intakes mounted at the highest point of the rack enclosure. The recommended temperature range for server intakes, however, as stated by ASHRAE Technical Committee 9.9 Mission Critical Facilities, is between 68 and 77 degrees Fahrenheit (approximately 20 to 25 degrees Celsius).
Thus, in a legacy data center design, the computer room is overcooled by sending extremely cold air under the raised floor, simply because there is a lack of temperature control as the air moves upward through the rack or cabinet enclosure.
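The stratification effect described above can be sketched by estimating intake temperatures up the height of a rack and comparing them against the ASHRAE TC 9.9 recommended range. The linear temperature rise is a simplifying assumption (real profiles are nonlinear), and the rack dimensions below are hypothetical; only the 55 F supply, 90 F top-of-rack, and 68 to 77 F recommended range come from the discussion above.

```python
ASHRAE_LOW_F, ASHRAE_HIGH_F = 68.0, 77.0  # TC 9.9 recommended intake range

def intake_profile(t_floor_f: float, t_top_f: float, n_units: int) -> list:
    """Assume a linear temperature rise from the bottom to the top of the
    rack and return an estimated intake temperature per rack position."""
    step = (t_top_f - t_floor_f) / (n_units - 1)
    return [t_floor_f + i * step for i in range(n_units)]

# Hypothetical rack: 55 F at the perforated tile, 90 F at the top,
# eight servers mounted from bottom to top.
for pos, t in enumerate(intake_profile(55.0, 90.0, 8), start=1):
    status = "OK" if ASHRAE_LOW_F <= t <= ASHRAE_HIGH_F else "out of range"
    print(f"position {pos}: {t:.1f} F ({status})")
```

Under these assumptions only the middle positions fall inside the recommended range: the lowest servers are overcooled and the highest are overheated, which is the overcooling-plus-overheating pattern the preceding paragraph describes.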
In addition, because the hot air and the cold air are not isolated, and tend to mix, dedicated air conditioning units are typically located close to the rack enclosures, which may not be the most efficient or economical placement. In some situations, the most efficient or economical solution may be to use the building's air conditioning system, rather than having air conditioning units that are dedicated to the data center, or a combination of dedicated air conditioning units and the building's air conditioning system.
Air Stagnation
Large data centers typically have areas where the air does not flow naturally. As a result, the available cooling cannot be delivered to the computing equipment. In practice, data centers may take measures to generate air flow in these areas by utilizing air scoops, directional vanes, oscillating floor fans, and active fan-based floor tiles.
Uncomfortable Data Center Ambient Room Temperature
Data center ambient room temperature is typically not conditioned for comfortable working conditions. Instead, the ambient air temperature in a data center is typically determined by the inefficiencies between providing cool air and removing heated air.
There is a need in the art, then, for improved methods for heat dissipation in equipment racks, and improved systems and methods for heat containment and cold air isolation in data centers. In particular, there is a need to remedy the typical problems encountered in a data center, including bypass airflow, recirculation, hot and cold air remixing, air stagnation, air stratification, and uncomfortable data center ambient room temperature. Improved systems and methods are needed to eliminate wasted conditioned air and increase air conditioner efficiency.