Computer users often focus on the speed of computer microprocessors (e.g., megahertz and gigahertz). Many forget that this speed often comes with a cost—higher electrical consumption. For one or two home PCs, the extra power may be negligible when compared to the cost of running many other electrical appliances in a home. But in data center applications, where thousands of microprocessors may be operated, electrical power requirements can be very important.
Power consumption brings a second expense as well: the cost of removing the heat generated by the consumed electricity. By simple laws of physics, all of that power has to go somewhere, and it is, for the most part, converted into heat. A pair of microprocessors mounted on a single motherboard can draw 200-400 watts or more, essentially all of which is turned into heat. Multiply that figure by several thousand (or tens of thousands) to account for the many computers in a large data center, and one can readily appreciate the amount of heat that can be generated. It is much like having a room filled with thousands of burning floodlights.
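As a rough sketch of the scaling described above, the aggregate heat load is simply the per-board draw multiplied by the board count. The specific numbers below are illustrative assumptions (a midpoint of the 200-400 watt range and a hypothetical board count), not figures from any particular data center:

```python
# Illustrative estimate of data-center heat load.
# Both constants are hypothetical assumptions for the sake of example.
WATTS_PER_BOARD = 300   # midpoint of the 200-400 W per-motherboard range
NUM_BOARDS = 10_000     # "tens of thousands" of computers in a large facility

# Nearly all consumed electrical power ends up as heat to be removed.
total_watts = WATTS_PER_BOARD * NUM_BOARDS
print(f"Approximate heat load: {total_watts / 1e6:.1f} MW")
```

At these assumed figures the facility must reject on the order of 3 megawatts of heat, which is why cooling cost tracks so closely with compute power draw.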
Moreover, there are many benefits to placing computing components in as compact a space as possible. Such arrangements can permit faster processing speeds. Also, fewer components may be needed, such as when multiple processors are mounted on a single motherboard. In addition, such systems can be more reliable because they involve fewer connections and components, and can be produced in a more automated fashion. However, when systems are more compact, the same amount of heat may be generated in a much smaller space, and all of that heat may need to be removed from that smaller space.
Heat removal can be important because, although microprocessors may not be as sensitive to heat as people are, increases in temperature generally cause sharp increases in microprocessor errors. In sum, such a system may require electricity that ends up heating the chips, and then more electricity to cool the chips.