Organizations such as on-line retailers, Internet service providers, search providers, financial institutions, universities, and other computing-intensive enterprises often conduct computer operations from large-scale computing facilities. Such computing facilities house a large amount of server, network, and other computer equipment to process, store, and exchange data as needed to carry out the organization's operations. Typically, a computer room of a computing facility includes many server racks, and each server rack, in turn, includes many servers and associated computer equipment.
Because the computer room of a computing facility may contain a large number of servers, a large amount of electrical power may be required to operate the facility. In addition, the electrical power is distributed to a large number of locations spread throughout the computer room (e.g., many racks spaced from one another, and many servers in each rack). Usually, a facility receives a power feed at a relatively high voltage. This power feed is stepped down to a lower voltage (e.g., 110V). A network of cabling, bus bars, power connectors, and power distribution units is used to deliver the power at the lower voltage to numerous specific components in the facility.
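The trade-off behind stepping the feed down can be illustrated with basic circuit arithmetic: for a fixed load, lowering the distribution voltage proportionally raises the current that the cabling and bus bars must carry. The following sketch uses an assumed 480 V feed and an assumed 11 kW rack load; neither figure is taken from the text.

```python
# Illustrative sketch: why stepping power down to a lower voltage
# increases the current the distribution network must carry.
# The 480 V feed and 11 kW rack load below are assumptions.

def current_amps(power_watts: float, voltage: float) -> float:
    """Current drawn by a load at a given voltage: I = P / V."""
    return power_watts / voltage

rack_load_w = 11_000  # assumed total load of one rack, in watts

# At a relatively high feed voltage (e.g., a 480 V feed):
high_v_current = current_amps(rack_load_w, 480.0)

# After stepping down to 110 V for delivery to individual components:
low_v_current = current_amps(rack_load_w, 110.0)

print(f"At 480 V: {high_v_current:.1f} A")  # ~22.9 A
print(f"At 110 V: {low_v_current:.1f} A")   # ~100.0 A
```

The roughly fourfold increase in current at 110 V is one reason the low-voltage side of the distribution network requires so many conductors, connectors, and distribution units.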
Circuit board assemblies, power supplies, and hard disk drives all generate heat during operation. Some or all of this heat must be removed from these components to maintain continuous operation of the servers. The amount of heat generated by the circuit board assemblies, power supplies, and hard disk drives within a data room may be substantial, especially if all of the computing devices are fully powered up at all times.
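Because essentially all electrical power consumed by computing devices is ultimately dissipated as heat, the heat load of a data room can be approximated directly from power draw. The sketch below uses the standard conversion of 1 W ≈ 3.412 BTU/hr; the rack and server counts are assumptions for illustration only.

```python
# Illustrative sketch: approximating the heat load of a data room
# from the electrical power drawn by its computing devices.
# Rack count, server count, and per-server wattage are assumptions.

WATTS_TO_BTU_PER_HR = 3.412  # standard unit conversion

def room_heat_load_btu_hr(racks: int, servers_per_rack: int,
                          watts_per_server: float) -> float:
    """Approximate heat load if all devices are fully powered up."""
    total_watts = racks * servers_per_rack * watts_per_server
    return total_watts * WATTS_TO_BTU_PER_HR

# An assumed room: 40 racks, 30 servers each, 350 W per server.
print(f"{room_heat_load_btu_hr(40, 30, 350):,.0f} BTU/hr")
```

Even this modest assumed configuration dissipates over a megawatt-hour of heat energy per day, which is why cooling capacity is treated as a first-class infrastructure resource alongside power and space.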
In the design of a typical data center, efforts are made to implement a suitable number of computing devices for a given amount of space, cooling, and electrical power resources. Various aspects of a data center may be sub-optimal, however. The configurations of computing devices in a rack system or data center, for example, may not take full advantage of infrastructure resources (for example, cooling, power, or space) that could be made available to the computing devices. In some rack systems, for instance, the density of computing devices achieved in a rack is too low to utilize all of the resources available to the rack, such as data ports, electrical power, or cooling capacity. On the other hand, the configuration of computing devices in a rack or a data center may overload the rack's power distribution system (for example, trip a breaker in a rack power distribution unit). The effect of various component choices, firmware and hardware settings, and operating conditions on power draw and heat loads may not be known. In addition, the effect of power draw of computing devices on different environmental/cooling system conditions (for example, temperature and humidity), and vice versa, may not be known. For these reasons, among others, data centers may not be optimized from the standpoint of cost or efficiency.
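The breaker-trip scenario above can be sketched as a simple power-budget check: sum the expected draw of the devices in a rack and compare it against the derated capacity of the rack power distribution unit's breaker. The breaker rating, distribution voltage, and the 80% continuous-load derating factor used here are illustrative assumptions, not figures from the text.

```python
# Illustrative sketch: checking whether a rack configuration would
# overload its power distribution (e.g., trip a breaker in a rack
# power distribution unit). Breaker rating, voltage, and the 80%
# continuous-load derating factor are assumptions for illustration.

def rack_overloaded(server_watts: list[float], voltage: float,
                    breaker_amps: float, derate: float = 0.8) -> bool:
    """True if the summed draw exceeds the derated breaker capacity."""
    total_amps = sum(server_watts) / voltage
    return total_amps > breaker_amps * derate

# 30 servers at an assumed 350 W each on a 110 V, 30 A branch:
draws = [350.0] * 30
print(rack_overloaded(draws, 110.0, 30.0))  # 10,500 W -> ~95.5 A > 24 A
```

In practice this kind of static budget is only a starting point, since, as noted above, actual power draw varies with component choices, firmware and hardware settings, and environmental conditions that may not be known in advance.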
While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention as defined by the appended claims. The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word “may” is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words “include,” “including,” and “includes” mean including, but not limited to.