Computer users often focus on the speed of computer microprocessors (e.g., megahertz and gigahertz). Many forget that this speed often comes with a cost: higher power consumption, which in turn generates heat. By simple laws of physics, all of the power drawn by a device has to go somewhere, and that somewhere is, ultimately, conversion into heat. A pair of microprocessors mounted on a single motherboard can draw hundreds of watts or more of power. Multiply that figure by several thousand (or tens of thousands) to account for the many computers in a large data center, and one can readily appreciate the amount of heat that can be generated. The heat produced by the critical load in the data center is often compounded by the power consumed by all of the ancillary equipment required to support that load.
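The arithmetic above can be made concrete with a back-of-the-envelope estimate. The sketch below is purely illustrative: the per-board wattage, machine count, and overhead multiplier are assumed values (the text states only "hundreds of watts" per board and "several thousand" machines), and the function name is hypothetical.

```python
# Illustrative heat-load estimate for a data center. All figures are
# assumptions for the sake of the example, not values from the text.

def total_heat_load_watts(power_per_board_w, board_count, overhead_factor):
    """Estimate total heat dissipated, including ancillary equipment.

    Essentially all electrical power drawn by the critical load is
    eventually converted to heat; overhead_factor (akin to a PUE-style
    multiplier) accounts for the supporting infrastructure.
    """
    return power_per_board_w * board_count * overhead_factor

# Example: 400 W per motherboard, 10,000 machines, 1.5x ancillary overhead.
load_w = total_heat_load_watts(400, 10_000, 1.5)
print(f"{load_w / 1e6:.1f} MW")  # 6.0 MW of heat to remove
```

Even with conservative assumptions, the result lands in the megawatt range, which is why data-center cooling is treated as a first-class engineering problem.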
Many techniques may be used to cool electronic devices (e.g., processors, memories, networking devices, and other heat-generating devices) located on a server or network rack tray. For instance, forced convection may be created by providing a cooling airflow over the devices. Fans located near the devices, fans located in computer server rooms, and/or fans located in ductwork in fluid communication with the air surrounding the electronic devices may force the cooling airflow over the tray containing the devices. In some instances, one or more components or devices on a server tray may be located in a difficult-to-cool area of the tray, for example, an area where forced convection is not particularly effective or not available.
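Why forced convection helps can be seen from Newton's law of cooling, q = h · A · (T_surface − T_air): moving air raises the heat-transfer coefficient h. The sketch below assumes textbook-range coefficients (roughly 5-25 W/m²·K for still air, 25-250 W/m²·K for fan-driven air); the specific numbers and the function name are illustrative assumptions, not values from the text.

```python
# Sketch of the convective cooling relation q = h * A * (T_surface - T_air),
# i.e., Newton's law of cooling. Coefficient values are assumed,
# textbook-range figures for illustration only.

def convective_heat_w(h_w_per_m2k, area_m2, t_surface_c, t_air_c):
    """Heat removed by convection from a surface into the surrounding air."""
    return h_w_per_m2k * area_m2 * (t_surface_c - t_air_c)

# Same small heatsink (0.01 m^2) at 80 C in 25 C air:
natural = convective_heat_w(10.0, 0.01, 80.0, 25.0)    # still air
forced = convective_heat_w(100.0, 0.01, 80.0, 25.0)    # fan-driven airflow
print(natural, forced)  # 5.5 W vs 55.0 W
```

The order-of-magnitude jump in heat removal for the same surface and temperature difference is why a device sitting in a dead spot of the airflow, where the effective h stays low, is so much harder to keep within its temperature limits.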
The consequence of inadequate or insufficient cooling may be the failure of one or more electronic devices on the tray when the temperature of a device exceeds its maximum rated temperature. While certain redundancies may be built into a computer data center, a server rack, and even individual trays, the failure of devices due to overheating can come at a great cost in terms of speed, efficiency, and expense.