The present invention concerns data centres, a method of cooling equipment in a data centre and also subject matter ancillary thereto. More particularly, but not exclusively, this invention concerns data centre buildings, for example provided in modular form. The invention also concerns a data centre building, a method of cooling electronic equipment in a data centre building, a method of constructing a data centre building, a method of extending an existing modular data centre building, a rack room building module for building a data centre, and a door arrangement for use within a building, for example a data centre. The invention also concerns a method of constructing a data centre in a space within a building.
A data centre is a late 20th Century development that has grown as a response to the increasing demand for computer processing capability and a recognition of the importance of IT to every business and organisation today. Whereas smaller organisations have sufficient processing power with laptops, PCs and, occasionally, servers, larger organisations require higher capacity centralised processing to serve a wide range of needs and applications. A few years ago this capacity was supplied by large mainframe computers, but more recently the method used has been to provide data centres comprising many networked computer servers, known as “blades”, installed in racks, enabling controlled and modular expansion of capacity. The racks also typically house telecommunications equipment, such as routers, to handle data flow between the computer servers and data flow between the data centre and the outside world.
Data centres can mirror the growth and business activities of successful companies. The growth of a data centre within an expanding company may typically work as follows:

1. Initially the data centre may start as a single rack of servers in an air conditioned room, sometimes referred to as a ‘data closet’.

2. As the organisation expands, and along with it the number of IT racks employed, the closets become ‘Server Rooms’ or ‘IT Rooms’.

3. Eventually the number of racks and the size of the room expand, often to the point where a dedicated building, or part of a building, houses the IT. Whilst there is no strict definition of when the size of an IT facility becomes large, or sophisticated, enough to be termed a “data centre”, data centres are typically relatively large facilities providing robust and resilient IT services. Typically, there will be more than 50 servers (often many more) and at least some redundancy in the power supply powering the servers to ensure continuity of service.

4. As the company grows and/or becomes a multi-national organisation, additional data centres will be built and sometimes numbers of these will be consolidated into ‘Super Data Centres’.
Data centre facilities can require a floor space ranging from a few hundred square feet to a million square feet. The most prevalent size for a small data centre is five to ten thousand square feet with fifty to a hundred thousand square feet being the most common floor area requirement for a large data centre.
Data centres will typically have the ability to deliver applications spread across an organisation and/or supply chain and/or customers in differing geographical locations. There will typically be a dedicated mechanical and electrical (M&E) plant to deliver power, cooling and fire suppression with built-in redundancy with the aim of providing near continuous operation. The M&E plant may be located separately from the IT equipment to enable appropriately qualified engineers to work on either the M&E plant or the IT equipment independently of the other (thus improving security).
The IT industry has long recognised the criticality of central computing facilities and the need for energy efficient operations to control cost effectiveness. Current data centre technology is the summation of 30 years of innovation and engineering design thought and has come a long way in recent times. One key problem faced is how to cool a data centre effectively and efficiently. As explained above, a data centre can grow over time according to demand. As a result the following can happen:

1. A building is created, or a room within a building is allocated to IT. An electrical sub-system of conditioned (‘Clean’) power is run out to the IT room and the building's air conditioning system is adjusted to cool that room.

2. As the data room grows in scale, IT racks are laid out in rows. More IT products lead to more heat produced, and so increased ventilation and air conditioning is required. Typically, CRAC (Computer Room Air Conditioning) units are added to the end of the rows to provide the cooling. Air produced by these units is entrained through a raised floor and exits through floor grilles at the front of the IT rack rows. The IT products installed in the racks contain integral fans which draw the cooled air from the front across the circuitry, and heat is exhausted via vents in the products to the rear. The separation created by these IT racks creates a ‘hot aisle’, into which air is expelled by the IT products in the racks, and a ‘cold aisle’, from which cooler air is drawn into and through the IT products by their integral fans.

3. Dedicated M&E plant may be required. The M&E plant is sized based on an assessment of future business requirements (over the next decade, for example). Direct expansion (DX) or chilled water cooling plant is used to chill the air distributed within the data centre. Typically a ‘set-point’ is created to maintain the room at 21° C., allowing for IT heat output and/or external ambient conditions.
The way in which cooling is effected in purpose built data centres often results in a similar arrangement. Thus, the equipment in the data centre is prevented from over-heating by means of introducing cool air into the room. A typical arrangement of the prior art is shown schematically in FIG. 1 of the attached drawings. Thus, the data centre includes a rack room 1 defined by walls 2 in which two sets of racks 4 for IT equipment are accommodated. The IT equipment in the racks 4 generates heat, represented by dark arrows 6. The cooling of the IT equipment is achieved by introducing cold air into the room by means of air conditioning units, the cold air being represented by light arrows 8.
The drive for more efficient use of power has given rise to a need to make the cooling used in data centres more efficient, as cooling of equipment typically contributes significantly to the power used by a data centre. The efficiency of a data centre may be measured by means of a quantity known as the Power Usage Effectiveness (PUE), which is the ratio of the total energy used by a data centre, including the IT equipment, to the energy consumed by the IT equipment only. If the power consumed by a data centre were 2.5 MW, of which only 1.0 MW powers the IT equipment, then the PUE would be 2.5 (which represents an average PUE for a typical data centre). The closer to unity the PUE is, the more efficient the data centre. It is currently estimated that the most efficient data centres currently installed operate at a PUE of about 1.6.
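The PUE arithmetic above can be sketched in code as follows. This is a minimal illustration only; the function name is a convenience chosen here, and the figures are those of the worked example in the text rather than measurements from any particular facility.

```python
def power_usage_effectiveness(total_facility_power_mw: float,
                              it_equipment_power_mw: float) -> float:
    """Compute PUE: total facility power divided by IT equipment power.

    A value closer to 1.0 indicates a more efficient data centre,
    since less power is spent on overheads such as cooling.
    """
    if it_equipment_power_mw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_power_mw / it_equipment_power_mw

# Worked example from the text: 2.5 MW total, of which 1.0 MW powers the IT.
print(power_usage_effectiveness(2.5, 1.0))  # -> 2.5 (a typical average PUE)
```

As the example shows, reducing the cooling overhead (the difference between total and IT power) is the only way to move the ratio towards unity for a fixed IT load.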
In recent years, approaches have been adopted such as adding baffles across the top of the hot and/or cold aisles, with doors or further panels across the end of the aisle to contain entrainment of the air, leading to debate about whether it is more effective to ‘contain’ the cold aisle or the hot aisle. A baffle arrangement is, for example, proposed in WO 2006/124240 (American Power Conversion Corporation).
Some recent configurations have utilised a new generation of ‘in-row’ cooling units in-between the racks, or attached to the rear rack door. These bring the advantage of concentrated cooling but carry a high risk of refrigerant leakage. A slightly different arrangement, potentially suffering from similar problems, is described in EP1488305. EP1488305 discloses a plurality of cabinets forming a data centre, each cabinet housing a rack of IT equipment and each cabinet comprising an equipment cooling unit within the cabinet to provide cooling.
The data centre industry is also suffering from being unable to meet demand sufficiently quickly and from reacting to the need to make such data centres energy and space efficient. IT capacity has grown at an exponential rate, doubling about every 18 to 24 months, over the last 30 years. Cooling capacity and space limits are frequently and repeatedly reached, creating significant bottlenecks in IT businesses. Building a new data centre to alleviate such bottlenecks and meet demand is time consuming: traditional methods of constructing data centres can take up to two years to complete. Also, data centres are physically becoming larger year on year because current design and engineering practice seeks to deal with heat issues by assuming low rack density and spreading IT thinly across large numbers of racks or large volumes of space.
The present invention seeks to provide an improved data centre and/or an improved method of, or means for, cooling a data centre. Additionally or alternatively, the invention seeks to provide a data centre and/or a method of, or means for, cooling a data centre that mitigates one or more of the above-mentioned disadvantages.