Mass storage systems, particularly RAID systems, and working methods suitable for their operation, are known. In particular, “A Case for Redundant Arrays of Inexpensive Disks (RAID)” by David A. Patterson, Garth Gibson and Randy H. Katz, published at the International Conference on Management of Data, 1988, Chicago, discloses the concept, since known generally as the RAID system, of distributed storage of data over several hard disks that are physically independent of each other. A user of the mass storage system no longer notices the physical separation of the individual mass storage devices, but stores data in a logical or virtual file system, the associated data of which is archived physically on one or a plurality of mass storage devices.
The use of RAID systems offers a number of advantages. In particular, large volumes of data can be distributed over several physical hard disk drives. This has the advantage, inter alia, that read/write accesses can be accelerated, since the data transfer rates of several physical mass storage devices are available in parallel. Furthermore, redundancy of the data, and hence a safeguard against the failure of an individual hard disk drive, can be achieved by archiving data simultaneously on several physical mass storage devices. These advantages can also be at least partially combined with one another in different operating modes, which are known generally as RAID levels.
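The two basic mechanisms mentioned above, distribution for speed and duplication for redundancy, can be sketched as follows. This is a minimal illustration only; real RAID controllers operate on fixed-size blocks and may additionally compute parity (e.g., RAID 5), which is not shown here.

```python
def stripe(data: bytes, n_disks: int, chunk: int = 4):
    """RAID-0-style striping: distribute consecutive chunks
    round-robin across n_disks, so reads/writes can proceed
    on all disks in parallel."""
    disks = [bytearray() for _ in range(n_disks)]
    for i in range(0, len(data), chunk):
        disks[(i // chunk) % n_disks] += data[i:i + chunk]
    return disks

def mirror(data: bytes, n_disks: int):
    """RAID-1-style mirroring: every disk holds a full copy,
    so any single disk failure loses no data."""
    return [bytearray(data) for _ in range(n_disks)]

payload = b"ABCDEFGHIJKLMNOP"
striped = stripe(payload, 2)   # disk 0: b"ABCDIJKL", disk 1: b"EFGHMNOP"
mirrored = mirror(payload, 2)  # both disks hold the full 16 bytes
```

Striping halves the data each disk must transfer (higher throughput), while mirroring doubles the stored volume (redundancy); combined modes such as RAID 10 apply both at once.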
One disadvantage of the above-mentioned RAID approach is that operating a multiplicity of independent mass storage devices increases the energy requirement. Even the provision of just a few hard disk drives, for example, four hard disk drives each having a capacity of 250 GB, compared to the provision of a single hard disk drive having, for example, a capacity of 1 TB, entails a considerable additional demand for electrical energy, owing to the control electronics and the disk-pack drive motor that each individual drive requires. This additional energy requirement increases further if, as described above, redundant data retention is desired, since further hard disk drives are then used. The problem is exacerbated in particular in the commercial environment, in which very large amounts of data are nowadays held in centralized data centers. If several hundred or even several thousand hard disk drives are in use, further expense is incurred to cool them during operation, in addition to the power required to operate them.
Measures to reduce the energy consumption of individual disk drives are known. For example, it is known to use an operating system or an internal hard disk controller to deactivate the drive for a pack of rotating storage media temporarily if the drive is not currently being accessed. However, this known deactivation entails a number of disadvantages. First, upon renewed access to a mass storage device that has been deactivated in this manner, the disk pack must first be re-accelerated before the desired access can be carried out, which leads to longer access times. Second, the repeated switching off and re-acceleration of the disk pack results in increased mechanical wear and, hence, a shorter service life of the mass storage device. Therefore, at least in data centers in which a multiplicity of files are held for a multiplicity of users in extensive mass storage systems, such energy-saving options are normally not used, or only seldom.
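On Linux systems, for example, such a temporary spin-down can be configured per drive with the `hdparm` utility. The device path `/dev/sda` below is a placeholder, and the commands require root privileges; this is a configuration sketch, not part of the method described here.

```shell
# Set the standby (spin-down) timeout to 10 minutes:
# -S values 1-240 are multiples of 5 seconds, so 120 = 600 s.
hdparm -S 120 /dev/sda

# Force the drive into standby immediately, then query its power state.
hdparm -y /dev/sda
hdparm -C /dev/sda
```

Exactly the trade-off described above applies: the next access after standby must wait for the disk pack to spin up again, and each spin-up cycle adds mechanical wear.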
It could therefore be helpful to provide a working method for a mass storage system, and a corresponding mass storage system, which for substantially the same performance has a reduced energy consumption compared to known mass storage systems, or which for the same energy requirement provides better performance. From the point of view of its users, the mass storage system should preferably behave like a conventional mass storage system.