1. Field of the Invention
The present invention relates generally to a sensor system for monitoring conditions in a data center.
2. Related Art
A data center may be defined as a location, e.g., a room, that houses numerous printed circuit (PC) board electronic systems arranged in a number of racks. A standard rack may be defined as an Electronics Industry Association (EIA) enclosure, 78 in. (2 meters) high, 24 in. (0.61 meter) wide and 30 in. (0.76 meter) deep. Standard racks may be configured to house a number of PC boards, e.g., about forty (40) PC server systems, with some existing configurations of racks being designed to accommodate up to 280 blade systems. The PC boards typically include a number of components, e.g., processors, micro-controllers, high speed video cards, memories, and the like, that dissipate relatively significant amounts of heat during operation. For example, a typical PC board comprising multiple microprocessors may dissipate approximately 250 W of power. Thus, a rack containing forty (40) PC boards of this type may dissipate approximately 10 kW of power.
The power required to remove the heat dissipated by the components in a rack is generally equal to about 10 percent of the power needed to operate the components. However, the power required to remove the heat dissipated by a plurality of racks in a data center is generally equal to about 50 percent of the power needed to operate the components in the racks. The disparity between the power required to remove the heat load of an individual rack and that of an entire data center stems from, for example, the additional thermodynamic processing needed in the data center to cool the air.
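The figures above (forty boards at roughly 250 W each, with cooling overhead of about 10 percent at the rack level and 50 percent at the data-center level) can be illustrated with a rough calculation; the numbers are the approximate values stated in the text, not measurements:

```python
# Illustrative arithmetic for the heat loads described above.
# Values are the approximate figures from the text, not measurements.
BOARD_POWER_W = 250          # typical multi-microprocessor PC board dissipation
BOARDS_PER_RACK = 40         # boards housed in a standard EIA rack

rack_heat_w = BOARD_POWER_W * BOARDS_PER_RACK   # ~10 kW dissipated per rack

# Cooling power as a fraction of component operating power:
RACK_COOLING_FRACTION = 0.10         # fans moving air across components
DATACENTER_COOLING_FRACTION = 0.50   # reverse power cycle, condensers, fans

rack_cooling_w = rack_heat_w * RACK_COOLING_FRACTION               # ~1 kW
datacenter_cooling_w = rack_heat_w * DATACENTER_COOLING_FRACTION   # ~5 kW per rack

print(rack_heat_w, rack_cooling_w, datacenter_cooling_w)  # 10000 1000.0 5000.0
```

The five-fold gap between the last two figures is the disparity the text attributes to the additional thermodynamic processing performed at the data-center level.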
In one respect, racks are typically cooled with fans that operate to move cooling fluid, e.g., air, across the heat dissipating components; whereas, data centers often implement reverse power cycles to cool heated return air. The additional work required to achieve the temperature reduction, together with the work associated with moving the cooling fluid in the data center and the condenser, often adds up to the 50 percent power requirement. As such, the cooling of data centers presents problems in addition to those faced with the cooling of racks.
Conventional data centers are typically cooled by operation of one or more air conditioning units. The compressors of the air conditioning units typically require input power equal to a minimum of about thirty (30) percent of the required cooling capacity to sufficiently cool the data centers. The other components, e.g., condensers, air movers (fans), etc., typically require an additional twenty (20) percent of the required cooling capacity. As an example, a high density data center with 100 racks, each rack having a maximum power dissipation of 10 kW, generally requires 1 MW of cooling capacity.
Air conditioning units with a capacity of 1 MW of heat removal generally require a minimum of 300 kW input compressor power in addition to the power needed to drive the air moving devices, e.g., fans, blowers, etc. Conventional data center air conditioning units do not vary their cooling fluid output based on the distributed needs of the data center. Instead, these air conditioning units generally operate at or near a maximum compressor power even when the heat load is reduced inside the data center.
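The sizing in the 100-rack example above follows directly from the stated fractions; a minimal sketch of the arithmetic, using the approximate percentages given in the text:

```python
# Illustrative cooling-capacity sizing for the 100-rack example above.
RACKS = 100
RACK_DISSIPATION_KW = 10        # maximum power dissipation per rack

required_cooling_kw = RACKS * RACK_DISSIPATION_KW   # 1000 kW, i.e., 1 MW

COMPRESSOR_FRACTION = 0.30      # compressors: ~30% of required cooling capacity
OTHER_FRACTION = 0.20           # condensers, air movers, etc.: ~20% more

compressor_kw = required_cooling_kw * COMPRESSOR_FRACTION   # 300 kW input power
other_kw = required_cooling_kw * OTHER_FRACTION             # 200 kW input power

print(required_cooling_kw, compressor_kw, other_kw)  # 1000 300.0 200.0
```

Because conventional units run at or near this maximum compressor power regardless of the actual heat load, the input power does not fall when demand falls.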
The air conditioning units are generally operated substantially continuously according to a worst-case scenario. That is, cooling fluid is supplied to the components at around 100 percent of the estimated cooling requirement. In this respect, conventional cooling systems often attempt to cool components that may not need to be cooled. Consequently, conventional cooling systems often incur greater operating expenses than may be necessary to sufficiently cool the heat generating components contained in the racks of data centers.
Consequently, the inventors have developed systems for collecting temperature data from data centers so that the computing equipment can be cooled based on actual cooling requirements. Such systems can include large numbers of temperature sensors to provide detailed temperature information from many different locations within the data center, allowing a cooling system to provide cooling only where it is needed, and only to the extent actually needed by the electronic systems.
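The demand-based approach described above can be sketched as a simple selection rule over distributed sensor readings; the zone names, reading values, and 30 °C setpoint below are hypothetical, chosen only to illustrate the idea of cooling only where it is needed:

```python
# Hypothetical sketch of demand-based cooling: identify only the zones
# whose distributed temperature sensors report readings above a setpoint.
SETPOINT_C = 30.0  # hypothetical target temperature for illustration

def zones_needing_cooling(readings: dict[str, list[float]]) -> list[str]:
    """Return the zones whose hottest sensor exceeds the setpoint."""
    return [zone for zone, temps in readings.items()
            if max(temps) > SETPOINT_C]

# Hypothetical sensor readings (degrees C) from three rack rows:
readings = {
    "row-A": [24.5, 26.0, 25.1],
    "row-B": [29.8, 31.2, 30.4],
    "row-C": [27.0, 28.3, 27.9],
}
print(zones_needing_cooling(readings))  # ['row-B']
```

A cooling system driven by such per-zone data can direct capacity to `row-B` alone, rather than running at worst-case output for the entire room.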
Unfortunately, the traditional approach to sensing large numbers of temperatures distributed throughout a large volume is costly and cumbersome. Typically, thermistor or thermocouple sensors are used. These sensors require that a continuous wire be installed from the sensor location to a central instrument where temperature readings are made, and each sensor must be connected to the instrument with its own wire. The wire involved must meet special manufacturing and installation requirements to provide an accurate result. The result is a very costly and cumbersome installation, with large bundles of expensive wire spread throughout the measurement environment.
A typical temperature data acquisition system could include a central datalogger which contains a switch matrix and analog-to-digital converter, with cold-junction compensation for thermocouple input. Such systems are available that are capable of measuring 60 thermocouple temperature sensors, and some expand to accommodate a thousand sensors. Unfortunately, this sort of system requires installation of a continuous thermocouple wire from each measurement point to the instrument. Thermocouple wire is a specialized material that must be installed to special requirements for accurate results. Each conductor is a different material, and all junctions must maintain material integrity from sensor to instrument. This requires special connectors and installation procedures.
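The cold-junction compensation mentioned above can be sketched under a simplified linear thermocouple model; the roughly 41 µV/°C sensitivity approximates a type K thermocouple, whereas a real datalogger would use standard polynomial reference tables rather than a single constant:

```python
# Simplified cold-junction compensation sketch (linear thermocouple model).
# Real instruments use per-type polynomial reference tables; the constant
# here approximates the sensitivity of a type K thermocouple.
SEEBECK_UV_PER_C = 41.0  # assumed microvolts per degree C

def hot_junction_temp_c(measured_uv: float, cold_junction_c: float) -> float:
    """Recover the sensor temperature from the measured thermocouple
    voltage and the temperature of the reference (cold) junction."""
    # The instrument sees only the voltage difference between the hot and
    # cold junctions; add back the cold-junction contribution, then convert.
    total_uv = measured_uv + cold_junction_c * SEEBECK_UV_PER_C
    return total_uv / SEEBECK_UV_PER_C

# Example: 1025 uV measured with a 25 degC reference junction
# corresponds to a hot junction at 50 degC under this linear model.
print(hot_junction_temp_c(1025.0, 25.0))  # 50.0
```

The compensation is needed because the instrument's own terminals form a second thermocouple junction; without correcting for its temperature, every reading would be offset by the ambient temperature at the datalogger.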
Industrial versions of this type of equipment are also available, and present similar disadvantages. One such concept is the use of separate “transmitters” that convert from a thermocouple signal to a network protocol at each measurement point. This is a very costly approach, and requires that power be supplied at each measurement point. Additionally, any installation using traditional techniques would require custom mounting of sensors in the field. This can lead to a visually unappealing installation. Field installation also often results in wire routing that leaves sensor wire vulnerable to damage during operation of the data center.
Accordingly, it has been recognized that it would be advantageous to develop a distributed network of sensors that is simple to install on new or existing equipment rack systems, and that provides a low cost sensor assembly for deploying multiple sensors over a widely distributed area.