The throughput of communications transmitted between multiple computing devices via network connections continues to increase. Modern networking hardware enables physically separate computing devices to communicate with one another orders of magnitude faster than was possible with prior generations of networking hardware. Furthermore, high-speed network communication capabilities are being made available to a greater number of people, both in the locations where people work and in their homes. As a result, an increasing amount of data and services can be meaningfully provided via such network communications. Additionally, it has become more practical to perform digital data processing at a location remote from the user requesting such processing, or on whose behalf such processing is being performed. Consequently, large quantities of data processing capability are being aggregated into centralized locations comprising dedicated hardware and support systems. The large quantities of data processing capability offered by such centralized locations can then be shared across networks.
To provide such large-scale data and processing capabilities via network communications from a centralized location, the centralized location typically comprises hundreds or thousands of computing devices, commonly mounted in vertically oriented racks. Such a collection of computing devices, together with the associated hardware necessary to support them and the physical structure that houses the computing devices and associated hardware, is traditionally referred to as a “data center”. With the increasing availability of high-speed network communication capabilities, and thus the increasing provision of data and services from centralized locations, combined with the traditional uses of data centers, such as the provision of advanced computing services and massive amounts of computing processing capability, both the size and the quantity of data centers continue to increase.
However, computing devices consume energy and generate heat when performing processing. The aggregation of large quantities of computing devices in a single data center results in large amounts of power consumption and large quantities of generated heat, which must be removed to keep the computing devices operating optimally and to avoid overheating. Traditionally, data center power is provided by electricity sourced from a conventional electrical power grid and delivered to the various computing devices and support hardware through common metal-wire electrical connections. Data center cooling has likewise traditionally been provided by forced-air mechanisms that deliver cool air into a data center and remove hot air therefrom. The cool air is typically produced by cooling recirculated air through power-consuming cooling methodologies, such as air conditioning. The power consumed by the computing devices, support hardware, and air conditioning can introduce substantial cost into the operation of a data center. For example, large air conditioning units, such as are typically required by a data center, can consume large quantities of electrical power, often during the most expensive times of the day, resulting in high energy costs.