The invention relates to a cooling circuit system, in particular for use in a data center. Moreover, the invention relates to a control method for a cooling circuit system. Electric or electronic components to be cooled, which are arranged on a double bottom, are cooled with the aid of a cooling system, wherein fans blow cooled air out of the double bottom, as described in US 2005/0075065 A1.
Many organizations or enterprises use their own data center to execute their computing-intensive workloads. At present, data centers of all sizes are planned individually, so that recurring planning tasks are necessarily repeated for each new data center, incurring unnecessary cost already at the planning stage. In order to reduce the manufacturing effort on site, data centers or parts of data centers are pre-assembled according to the user's needs and transported to the place of destination, so that operational readiness can be ensured within a relatively short time. Data centers of this type are also referred to as modular data centers.
Normally, data centers include a large number of servers as well as network and computer equipment to process, store and exchange data as needed. Typically, many server racks are installed within a computer area in which the servers and associated equipment are accommodated.
Depending on the size of a data center, a large amount of electric energy may be required to operate the facilities. Generally, a relatively high voltage is fed in and transformed down to a lower voltage. A network comprising cabling, terminals and energy distribution equipment is used to deliver the energy at the lower voltage to the numerous individual components within the data center. These components produce waste heat on a significant scale, which must be dissipated, so that air conditioning is required.
A common evaluation metric is the power usage effectiveness (PUE) defined by "The Green Grid" consortium, which represents the ratio of the total energy consumption of a data center to the energy input of its computing equipment. Ratios below 1.3 are regarded as highly efficient.
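The PUE metric can be illustrated with a minimal calculation (a sketch only; the function name and the sample figures are illustrative and not taken from the application):

```python
def power_usage_effectiveness(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy (The Green Grid).

    A value of 1.0 would mean that all energy is consumed by the IT
    equipment itself; values below about 1.3 are regarded as highly
    efficient.
    """
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Illustrative figures: 1200 kWh total consumption, of which
# 1000 kWh are consumed by the servers themselves.
pue = power_usage_effectiveness(1200.0, 1000.0)
print(f"PUE = {pue:.2f}")  # PUE = 1.20, i.e. highly efficient (< 1.3)
```

The remaining 200 kWh in this example would be the overhead for cooling, power distribution losses and other infrastructure, which the air-conditioning concepts discussed below aim to reduce.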
Another problem in a data center is physical protection against hazards such as fire, smoke and water, which could impair or even destroy the servers in the data center.
The required computing capacity in a data center may change quickly when business conditions change. Often there is a need for increased computing capacity at a location where existing components must be taken into account when planning the expansion desired by the client. Extending existing capacity is, however, resource-intensive and takes a long time: cables must be laid, racks assembled and air conditioning systems built. Additional time is spent on inspections and certification. Scalability is therefore an important criterion for the client already when a new data center is constructed.
The company Silicon Graphics International Corp., Fremont, Calif., U.S.A., distributes a modular data center in container architecture, in which up to four server racks are included per unit, wherein the data center may be scaled up to 80 racks. The air conditioning and cooling system operates with intelligent fans and a three-stage evaporative fan cooling system with high energy efficiency.
The idea of a modular data center is also the subject of WO 2011/038348 A1. A modular computing system for a data center includes one or more data center modules having server systems which are organized in racks. A central electric module supplies electric energy to the data center modules. Air-based cooling modules are individually associated with each data center module and optionally include a fan. Air conditioning uses pre-cooled air, which is introduced into the data center modules. Further, a fire protection system is included, which seals off the electrical modules in case of a fire. The modules of a system may be pre-assembled, including functional elements and structural elements. These may be transported as a unit and may be quickly mounted at a desired place.
For all known data centers, including modular data centers, the disadvantage of individual planning remains, in particular when upgrading existing data centers, wherein efficiency considerations can be taken into account only insufficiently.
Servers and racks of a data center, as well as the infrastructure equipment for power and climate control, are built up on a second floor mounted above a base floor, together referred to as a double bottom. Presently, the main objective when air-conditioning a data center is a guaranteed supply of cold air. For example, circulating-air climate systems are used which blow cold air into the space between the two floors, from where the cold air is drawn in by the servers through specially designed plates of the double bottom. Hot exhaust air is discharged at the rear side of the racks and is drawn in by the circulating-air climate system, which cools the air down and supplies the cooled air back to the servers in a circuit through the double bottom. Energy efficiency plays a subordinate role. Often, the double bottom also accommodates the power supply and network cabling. A double bottom system is described, for example, in WO 2009/109296 A1.
DE 20 2009 015 124 U1 describes a system for cooling electric and electronic components and module units in device cabinets which are, for example, disposed in a data center. It may be provided that a cooling unit with fans is positioned in a double bottom below a rack, wherein the fans are separated from one another in terms of air circulation. Each fan is associated with a means for preventing recirculation which is arranged downstream of the relevant fan with respect to the air flow direction.
US 2005/0075065 A1, which was mentioned above, and US 2004/0065097 A1 rely on a cooling circuit system which circulates the entire air volume within the data center.