As the cost of computing power continues to decrease, an increasing number of users deploy ever more sophisticated applications on ever more capable computing devices. Many applications are implemented most efficiently on specialized equipment such as servers which are capable of processing and responding to requests from many users simultaneously (e.g., requests for web pages, etc.). Often these servers and associated equipment such as switching, routing, patching and data storage systems and telecommunications equipment are operated in large facilities referred to as “data centers.” Commercial data centers include Internet data centers, which are typically operated by Internet service providers, enterprise data centers, which are operated by corporations to support their businesses, and so-called co-location data centers that are run by companies that support data center operations for other entities. As it can be critically important to businesses that these computing operations are available and properly functioning twenty-four hours a day, seven days a week, most data centers employ extensive redundancy and take other precautions to minimize the possibility that various systems in the data center malfunction and/or become unavailable.
In a typical data center, the servers and other computer equipment are mounted in cabinets and racks that may comply with industry standardized design specifications. Herein, these cabinets and racks are generically referred to as “equipment racks,” and it will be understood that the term “equipment racks” as used herein encompasses both open-frame racks and racks/cabinets having sidewalls, back walls, doors and the like. The equipment racks are typically aligned in rows with aisles running therebetween. Conventionally, a single row of equipment racks is provided between two adjacent aisles in order to allow technicians convenient access to both the front side and the back side of each equipment rack, which may make it easier for the technicians to make patching changes, swap out equipment, plug in power cords, perform repairs and the like. This configuration may facilitate maintaining the temperature within the data center within desired ranges using a conventional hot aisle/cold aisle arrangement that is discussed in more detail below. A side of an equipment rack such as the front side or the back side that faces an aisle is referred to herein as an “aisle face” of the equipment rack. Data centers also typically have raised floors, and cabling that connects the computer equipment may be routed through conduits provided under the raised floors and/or in overhead cabling trays.
Another factor of great importance in data centers is minimizing operational cost, including energy consumption. As data centers continue to grow, their energy consumption is becoming an ever more scrutinized parameter. Metrics have been developed to measure the energy efficiency of data centers. One very common metric is referred to as Power Usage Effectiveness (“PUE”). This metric is the ratio of the total power delivered to the data center to the power delivered to the electronic equipment such as servers, storage equipment and network equipment. The closer the PUE is to 1.0, the better. According to the Uptime Institute, the average PUE was 2.5 in 2007 but, as a result of significant effort by the industry, was down to 1.65 as of mid-2013. There is great interest in further reducing the energy consumption of data centers.
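As an illustrative sketch of the PUE calculation described above (the function name and the use of kilowatts are assumptions for illustration, not from the source), the ratio can be computed as:

```python
def power_usage_effectiveness(total_facility_power_kw, it_equipment_power_kw):
    """Compute PUE: total power delivered to the data center divided by
    the power delivered to the electronic (IT) equipment.

    A PUE of 1.0 would mean all delivered power reaches the IT load;
    higher values indicate overhead such as cooling and power distribution.
    Units cancel, so any consistent power unit (kW here) may be used.
    """
    if it_equipment_power_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_power_kw / it_equipment_power_kw

# Hypothetical example: 1650 kW delivered to the facility with 1000 kW
# reaching the IT equipment corresponds to the 1.65 average cited above.
print(power_usage_effectiveness(1650.0, 1000.0))  # 1.65
```

Note that the hypothetical facility and IT power values here are chosen only to reproduce the cited industry-average figures; actual facility measurements would be substituted in practice.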
Data centers also typically include extensive electrical power distribution and cooling systems. The electronic equipment in a data center can produce large amounts of heat, and care must be taken to ensure that each item of electronic equipment is operated within its specified operating temperature range. Extensive design work may be performed in an effort to ensure that data centers maintain the electronic equipment within these specified operating temperature ranges.