The present invention relates generally to climate control of electrical systems and more particularly to apparatus, methods, and systems for cooling electronic components within enclosures.
Industrial data centers have traditionally been designed to accommodate relatively large mainframe computer systems. These systems include stand-alone hinged cabinets containing central processing units, tape drive systems, disk drives, printers, control consoles, and the like. When assembled within a data center, the systems have required a relatively large amount of floor area within a given building, as well as a carefully controlled environment. Controlling that environment typically requires a dedicated, sealed computer room served by correspondingly dedicated air-conditioning systems. The residents of these rooms, typically computers with one or more processors, generate substantial heat during operation. Excess heat is undesirable in this environment, as the processors work more efficiently and fail less often at lower temperatures. Because of the extensive electrical interconnection required both for power supply and for system communication, these computer rooms typically contain raised floors formed of tiles supported upon frames, beneath which the complex cable networks can be laid.
Generally, the provision of such computer rooms has represented a substantial financial investment. The air-conditioning equipment and the distribution of air through a raised-floor plenum represent a further significant investment, as well as an ongoing cooling challenge. Indeed, properly cooling these computer rooms and their delicate residents has proved one of the greatest challenges in designing and constructing them.
In the recent past, industry has introduced processing systems employing modern, modular electronics with supporting components that permit rack-mounted installation. Such modularized designs provide substantial flexibility in accommodating varying processing demands.
Current high-compute-density data centers may contain thousands of racks, each holding these modular computing units. A computing unit may include multiple microprocessors, each dissipating approximately 250 W of power. The heat dissipation from a rack containing such computing units typically may exceed 10 kW. For example, a data center with 1,000 racks, spread over 30,000 square feet, requires about 10 MW of power for the computing infrastructure alone. The power required to remove this heat amounts to roughly an additional 4 MW, which may add millions of dollars per year to the cost of operating the cooling infrastructure.
A typical microprocessor system board contains one or more CPUs (central processing units) with associated cache memory, support chips, and power converters. The system board is typically mounted in a chassis containing mass storage, input/output cards, a power supply, and cooling hardware. Several such systems, each with a maximum power dissipation of up to 250 W, are mounted in a rack. The rack used in current data centers is an Electronic Industries Alliance (EIA) enclosure, 2 meters (78 in) high, 0.61 meter (24 in) wide, and 0.76 meter (30 in) deep. More information regarding standard EIA enclosures can be found at the Electronic Industries Alliance website, www.eia.org. A standard 2-meter rack has an available height of 40 U, where 1 U is 44.45 mm (1.75 in). Recent market forces have driven production of 1U-high systems; a rack can therefore accommodate 40 of these systems. If the power dissipation from each system is 250 W, a single rack in a data center can be assumed to dissipate 10 kW.
A purveyor of computing services, such as an internet service provider, may install these rack-based systems in a data center. To maximize the compute density per unit area of the data center, there is tremendous impetus to maximize both the number of systems per rack and the number of racks per data center. If 80 half-U systems were accommodated per rack, the power dissipation would reach 20 kW per rack, again assuming each system dissipates 250 W.
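The rack- and facility-level figures above follow from simple arithmetic, which can be sketched as follows (all values are taken from the text; the variable names are illustrative only):

```python
# Back-of-the-envelope check of the power figures quoted above. All
# numbers come from the text; this is an illustrative check, nothing more.

U_PER_RACK = 40          # usable height of a standard 2 m EIA rack, in U
WATTS_PER_SYSTEM = 250   # assumed maximum dissipation per system

# One 1U system per U gives 40 systems per rack:
rack_1u_watts = U_PER_RACK * WATTS_PER_SYSTEM            # 10,000 W = 10 kW

# Half-U systems double the density to 80 systems per rack:
rack_half_u_watts = 2 * U_PER_RACK * WATTS_PER_SYSTEM    # 20,000 W = 20 kW

# A 1,000-rack data center at 10 kW per rack:
it_load_mw = 1_000 * rack_1u_watts / 1e6                 # 10 MW compute load
# plus, per the text, roughly 4 MW more to remove that heat.
```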
With the racks fully loaded, the equipment presents a significant heat load. Moreover, the present invention recognizes that the infrastructure of today should be able to sustain the power dissipation and distribution of tomorrow. The power dissipation from computer components and systems, especially the high power density of future microprocessors, will require cooling solutions of unprecedented sophistication. Similarly, the units will call for correspondingly greater uninterruptible power supply capacity. These requirements, particularly when more than one component of a system is utilized (a typical case), generally cannot be accommodated by a building's in-place air-conditioning system, nor by its in-place power capabilities.
The general approach has been to resort to a conventional sealed computer room, which essentially compromises many of the advantages of this modular form of processing system. Such computer room installations may further be called for in locations that are not owned, or where the user of the systems otherwise lacks complete control over the power and air-conditioning of the building. A failure or shutdown of the cooling system can lead to computer malfunction, failure, or even permanent damage, with costly consequences for the user. In conventional data centers, where air is typically the medium that carries heat to the distant air-conditioning units, large temperature gradients result in expensive cooling inefficiencies. Thus, even when these systems operate as intended, they are largely inefficient.
The user is thus called upon to find a technique for augmenting total cooling capacity at a minimum of expense while facilitating ease of manufacture, increasing capacity and serviceability, and decreasing total space.
In one aspect of the present invention, there is provided an apparatus for housing electronic components. The apparatus includes an enclosure and one or more mounting boards mounted to the enclosure, wherein the mounting boards have the electronic components mounted thereto. The apparatus further includes a supply plenum having one or more outlets directed toward the mounting boards. One or more heat exchanging devices are mounted to the enclosure, and one or more blowers are also mounted to the enclosure. The blowers are fluidically interposed between a heat exchanging device and the supply plenum to move air from the heat exchanging device, through the plenum, and past the mounting boards.
In another aspect of the present invention, a method of cooling electronic components is provided. First, mounting boards having the electronic components mounted thereto are provided. Second, the mounting boards are docked within an enclosure. Finally, the electronic components are cooled by exchanging heat between a cooling medium and air within the enclosure, using a heat exchanging device, to produce cooled air, and by moving the cooled air into contact with the electronic components.
In yet another aspect of the present invention, another method of cooling electronic components is provided. First, an enclosure is provided to house the electronic components. Second, temperature is sensed in at least one location within the enclosure. Third, heat is exchanged between a cooling medium and air within the enclosure, using a heat exchanging device, to produce cooled air. Fourth, the cooled air is moved into contact with the electronic components to cool them. Finally, the steps of exchanging heat and moving the cooled air are adjusted in response to the sensed temperature.
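As a rough illustration of the final, feedback step, the adjustment can be sketched as a simple proportional rule. The function, gain, and duty range below are hypothetical assumptions for illustration only, not part of the claimed method:

```python
# Sketch of the temperature-feedback step described above: one sensed
# temperature drives the rate of heat exchange and air movement.
# The gain, nominal duty, and limits are illustrative assumptions.

def control_step(sensed_temp_c, setpoint_c, gain=0.1,
                 min_duty=0.2, max_duty=1.0):
    """Return a blower/heat-exchanger duty (0..1) from one reading.

    Hotter than setpoint -> more cooling; cooler -> throttle back.
    """
    error = sensed_temp_c - setpoint_c
    duty = 0.5 + gain * error            # nominal 50% duty at setpoint
    return max(min_duty, min(max_duty, duty))

print(control_step(25.0, 25.0))   # at setpoint: nominal 0.5
print(control_step(30.0, 25.0))   # 5 degrees hot: clamped to max, 1.0
```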
In still another aspect of the present invention, there is provided a system for cooling electrical components. The system includes several enclosures, a common coolant supply line, and a control chip. Each enclosure includes one or more mounting boards mounted thereto, wherein each mounting board has electronic components mounted thereto. Each enclosure further includes a supply plenum having one or more outlets directed toward the mounting boards. One or more heat exchanging devices are mounted to each enclosure, and coolant supply lines are fluidically connected to the heat exchanging devices. One or more valves are fluidically connected to each coolant supply line for valving fluid flow to the heat exchanging devices. One or more blowers are mounted to each enclosure, wherein the blowers are fluidically interposed between a respective heat exchanging device and the supply plenum to move air from the heat exchanging device, through the supply plenum, past the mounting boards, and back to the heat exchanging device. One or more variable outlet devices are positioned in fluidic communication with the outlets of the supply plenum, and one or more temperature sensing devices are mounted within each enclosure. The common coolant supply line is fluidically connected to the individual coolant supply line of each of the enclosures. Finally, one or more control chips are mounted to one or more of the enclosures, wherein the control chip is electronically connected to and receives input from the temperature sensing devices. In turn, the control chip is electronically connected to and transmits output to the valves, the blowers, and the variable outlet devices to vary the performance of each enclosure individually and to vary the performance of all of the enclosures collectively as a system.
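The sense-and-actuate relationship in this system aspect can be sketched as follows. The data layout, gain, and update rule are illustrative assumptions, not the claimed implementation:

```python
# Sketch of the multi-enclosure cooling system described above: each
# enclosure's control chip reads its temperature sensor and drives its
# coolant valve, blower, and variable outlet device, while all
# enclosures draw from a common coolant supply line. Names, gain, and
# update rule are hypothetical assumptions for illustration.

from dataclasses import dataclass

def _clamp01(x):
    return max(0.0, min(1.0, x))

@dataclass
class Enclosure:
    sensed_temp_c: float
    valve_open: float = 0.5    # coolant valve position, 0..1
    blower_duty: float = 0.5   # blower speed, 0..1
    outlet_open: float = 0.5   # variable outlet position, 0..1

def update_enclosure(enc, setpoint_c, gain=0.05):
    """One control-chip cycle: nudge all three actuators toward setpoint."""
    delta = gain * (enc.sensed_temp_c - setpoint_c)
    enc.valve_open = _clamp01(enc.valve_open + delta)
    enc.blower_duty = _clamp01(enc.blower_duty + delta)
    enc.outlet_open = _clamp01(enc.outlet_open + delta)

def update_system(enclosures, setpoint_c):
    """Vary each enclosure individually; report collective coolant demand."""
    for enc in enclosures:
        update_enclosure(enc, setpoint_c)
    return sum(enc.valve_open for enc in enclosures)

racks = [Enclosure(sensed_temp_c=28.0), Enclosure(sensed_temp_c=22.0)]
demand = update_system(racks, setpoint_c=25.0)  # hot rack opens, cool rack closes
```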