In data storage systems, an array of independent storage devices can be configured to operate as a single virtual storage device using a technology known as RAID (Redundant Array of Independent Disks—first referred to as a ‘Redundant Array of Inexpensive Disks’ by researchers at the University of California at Berkeley). In this context, ‘disk’ is often used as shorthand for ‘disk drive’.
A RAID storage system includes an array of independent storage devices and at least one RAID controller. A RAID controller provides a virtualized view of the array of independent storage devices, and a computer system configured to operate with the RAID storage system can perform input and output (I/O) operations as if the array of independent storage devices were a single storage device. The array of storage devices thus appears as a single virtual storage device with a sequential list of storage elements. The storage elements are commonly known as blocks of storage, and the data stored within them are known as data blocks. I/O operations (such as read and write) are qualified with reference to one or more blocks of storage in the virtual storage device. When an I/O operation is performed on the virtual storage device, the RAID controller maps the I/O operation onto the array of independent storage devices. In order to virtualize the array of storage devices and map I/O operations, the RAID controller may employ standard RAID techniques that are well known in the art. Some of these techniques are briefly considered below.
A RAID controller spreads data blocks across the array of independent storage devices. One way to achieve this is a technique known as striping. Striping involves spreading data blocks across storage devices in a round-robin fashion. When storing data blocks in a RAID storage system, a group of data blocks known as a strip is stored on each storage device. The size of a strip may be determined by a particular RAID implementation, or it may be configurable. A row of strips, comprising a first strip stored on a first storage device and subsequent strips stored on subsequent storage devices, is known as a stripe. The size of a stripe is the total size of all strips that comprise the stripe.
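The round-robin address mapping described above can be sketched as follows. This is a minimal illustration only; the function name, strip size and device count are hypothetical, and actual RAID implementations may use different layouts.

```python
# Sketch of striping address arithmetic (illustrative; not any particular
# controller's implementation). Maps a logical block number on the virtual
# device to a (device_index, physical_block) pair on the array.

def map_block(logical_block: int, num_devices: int, strip_size: int):
    """Locate a logical block in a striped array.

    strip_size is the number of blocks per strip; strips are placed on
    devices in round-robin order, one strip per device per stripe.
    """
    blocks_per_stripe = strip_size * num_devices
    stripe = logical_block // blocks_per_stripe        # which row of strips
    offset_in_stripe = logical_block % blocks_per_stripe
    device = offset_in_stripe // strip_size            # round-robin device
    block_in_strip = offset_in_stripe % strip_size
    physical_block = stripe * strip_size + block_in_strip
    return device, physical_block

# With 4 devices and 16-block strips, logical block 70 lies in stripe 1,
# in the strip on device 0, at physical block 22 on that device:
print(map_block(70, num_devices=4, strip_size=16))  # → (0, 22)
```

Because consecutive strips land on different devices, a large sequential I/O spans several devices and they can service it in parallel.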
The use of multiple independent storage devices to store data blocks in this way provides for high performance I/O operations when compared to a single storage device, because the multiple storage devices can act in parallel during I/O operations. Performance improvements are one of the major benefits of RAID technology. Hard disk drive performance is important in computer systems, because hard disk drives are some of the slowest internal components of a typical computer.
Some hard disk drives are known for poor reliability, and yet hard disk drive reliability is critical because of the serious consequences of an irretrievable loss of data (or even a temporary inaccessibility of data). An important purpose of typical RAID storage systems is to provide reliable data storage.
One technique to provide reliability involves the storage of check information along with data in an array of independent storage devices. Check information is redundant information that allows regeneration of data which has become unreadable due to a single point of failure, such as the failure of a single storage device in an array of such devices. Unreadable data is regenerated from a combination of readable data and redundant check information. Check information is recorded as ‘parity’ data which may occupy a single strip in a stripe, and is calculated by applying the EXCLUSIVE OR (XOR) logical operator to all data strips in the stripe. For example, a stripe comprising data strips A, B and C would have an associated parity strip calculated as A XOR B XOR C. In the event of a single point of failure in the storage system, the parity strip is used to regenerate an inaccessible data strip. If a stripe comprising data strips A, B, C and PARITY is stored across four independent storage devices W, X, Y and Z respectively, and storage device X fails, strip B stored on device X would be inaccessible. Strip B can be computed from the remaining data strips and the PARITY strip through an XOR computation. This restorative computation is A XOR C XOR PARITY = B. It exploits the reversible nature of the XOR operation to yield any single lost strip, A, B or C. The same approach applies if the lost strip is the PARITY information itself, which can simply be recomputed from the data strips as A XOR B XOR C.
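The parity calculation and regeneration described above can be demonstrated concretely. The sketch below uses short byte strings as stand-in strips; the helper name and strip contents are purely illustrative.

```python
# Sketch of XOR parity: compute a parity strip from data strips, then
# regenerate a lost data strip from the survivors plus the parity strip.

def xor_strips(*strips: bytes) -> bytes:
    """XOR equal-length strips together, byte by byte."""
    out = bytearray(len(strips[0]))
    for strip in strips:
        for i, byte in enumerate(strip):
            out[i] ^= byte
    return bytes(out)

strip_a = b"AAAA"
strip_b = b"BBBB"
strip_c = b"CCCC"

# PARITY = A XOR B XOR C, written to the parity strip of the stripe.
parity = xor_strips(strip_a, strip_b, strip_c)

# Suppose the device holding strip B fails; B is regenerated from the
# remaining data strips and the parity strip: A XOR C XOR PARITY = B.
recovered_b = xor_strips(strip_a, strip_c, parity)
assert recovered_b == strip_b
```

The same `xor_strips` call recomputes the parity strip itself if that is what was lost, since XOR treats data and parity strips symmetrically.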
In addition to striping (for the performance benefits of parallel operation) and parity (for redundancy), another redundancy technique used in some RAID solutions is mirroring. In a RAID system using mirroring, all data in the system is written simultaneously to two hard disk drives. This protects against failure of either of the disks containing the duplicated data and enables relatively fast recovery from a disk failure (since the data is ready for use on one disk even if the other failed). These advantages have to be balanced against the disadvantage of increased cost (since half the disk space is used to store duplicate data). Duplexing is an extension of mirroring that duplicates the RAID controller as well as the disk drives—thereby protecting against a failure of a controller as well as against disk drive failure.
Different RAID implementations use different combinations of the above techniques. A number of standardized RAID methods are identified as single RAID “levels” 0 through 7, and “nested” RAID levels have also been defined. For example:
RAID 1 uses mirroring (or duplexing) for fault tolerance; whereas
RAID 0 uses block-level striping without parity—i.e. no redundancy, and so none of the fault tolerance of other RAID levels, but good performance relative to its cost; RAID 0 is typically used for non-critical data (or data that changes infrequently and is backed up regularly) and where high speed and low cost are more important than reliability;
RAID 3 and RAID 7 use byte-level striping with parity; and
RAID 4, RAID 5 and RAID 6 use block-level striping with parity. RAID 5 uses a distributed parity algorithm, writing data and parity blocks across all the drives in an array (which improves write performance slightly and enables improved parallelism compared with the dedicated parity drive of RAID 4). Fault tolerance is maintained in RAID 5 by ensuring that the parity information for any given block of data is stored on a drive separate from the drive used to store the data itself. RAID 5 combines good performance, good fault tolerance and high capacity and storage efficiency, and has been considered the best compromise of any single RAID level for applications such as transaction processing and other applications which are not write-intensive.
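The distributed parity placement that distinguishes RAID 5 from RAID 4 can be illustrated with a simple rotation rule. This is one of several rotation schemes used in practice (e.g. ‘left-asymmetric’); the exact layout varies by implementation, so the function below is a sketch rather than a definitive algorithm.

```python
# Sketch of rotating parity placement in a RAID 5 array. In RAID 4 a single
# dedicated drive holds all parity; in RAID 5 the parity strip's device
# changes from stripe to stripe so no one drive becomes a bottleneck.

def parity_device(stripe: int, num_devices: int) -> int:
    """Device index holding the parity strip for a given stripe,
    rotated across the array (one common rotation convention)."""
    return (num_devices - 1 - stripe) % num_devices

for stripe in range(4):
    print(f"stripe {stripe}: parity on device {parity_device(stripe, 4)}")
# → stripe 0: parity on device 3
#   stripe 1: parity on device 2
#   stripe 2: parity on device 1
#   stripe 3: parity on device 0
```

Over any `num_devices` consecutive stripes, each device holds parity exactly once, which spreads the parity-update write load evenly across the array.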
In addition to the single RAID levels described above, nested RAID levels are also used to further improve performance. For example, features of high performance RAID 0 may be combined in a nested configuration with features of redundant RAID levels such as 1, 3 or 5 to also provide fault tolerance.
RAID 01 is a mirrored configuration of two striped sets, and RAID 10 is a stripe across a number of mirrored sets. Both RAID 01 and RAID 10 can yield large arrays with (in most uses) high performance and good fault tolerance.
A RAID 15 array can be formed by creating a striped set with parity using multiple mirrored pairs as components. Similarly, RAID 51 is created by mirroring entire RAID 5 arrays—each member of either RAID 5 array is stored as a mirrored (RAID 1) pair of disk drives. The two copies of the data can be physically located in different places for additional protection. Excellent fault tolerance and availability are achievable by combining the redundancy methods of parity and mirroring in this way. For example, an eight drive RAID 15 array can tolerate failure of any three drives simultaneously. After a single disk failure, the data can still be read from a single disk drive, whereas RAID 5 would require a more complex rebuild.
As an example, RAID 5 enables single drive errors to be corrected. In an exemplary 14 drive RAID 5 system there can be 12 drives that store data, one drive to store parity, and one spare drive to which the information on a failed drive can be migrated during a RAID rebuild operation. A 14 drive RAID 10 system, by contrast, is partitioned as two sets, each of six data drives and one parity drive. As a result, it can be appreciated that a mirrored configuration would require about twice the number of physical disks, as compared with RAID 5, to meet the same storage capacity requirement (the exact ratio is 2N/(N+1), where N is the number of data disks in the RAID 5 array). It is known that a RAID 5 configured system can be migrated to a RAID 10 system, but the reverse is not generally true.
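The drive-count comparison above can be checked with simple arithmetic. The sketch below assumes the basic accounting stated in the text: mirroring needs two drives per data drive, while RAID 5 needs one extra drive's worth of parity for N data drives.

```python
# Sketch of the capacity cost comparison: drives needed to hold N data
# drives' worth of capacity under mirroring (RAID 10) versus RAID 5.

def drives_raid10(n_data: int) -> int:
    return 2 * n_data        # every data drive is duplicated on a mirror

def drives_raid5(n_data: int) -> int:
    return n_data + 1        # one additional drive's worth of parity

n = 12  # data drives, as in the 14 drive RAID 5 example (12 data + parity + spare)
ratio = drives_raid10(n) / drives_raid5(n)
print(drives_raid10(n), drives_raid5(n), round(ratio, 2))  # → 24 13 1.85
```

For N = 12 the ratio 2N/(N+1) is 24/13 ≈ 1.85, approaching 2 as N grows—hence the observation that mirroring needs roughly twice the physical disks of RAID 5 for the same usable capacity.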
The relatively low cost parallel Advanced Technology Attachment (ATA) disk drive, also sometimes referred to as an Integrated Drive Electronics (IDE) drive, and the serial ATA (SATA) disk drive have been widely used for years in consumer Personal Computer (PC) equipment (both desktop and laptop). However, at least partially in response to the evolutionary increase in the data storage capacity of these disk drives, a trend is developing to utilize ATA and/or SATA drives in larger scale open and enterprise level disk-based storage systems, including RAID-based storage systems such as those briefly discussed above.
A problem created by this trend relates to reliability: the inherent reliability of ATA and SATA drives, and the consequent Mean Time Between Failures (MTBF), can be significantly less than that of other types of disk drives that have traditionally been used in large scale, enterprise-class disk storage systems. One result is that the failure rate, and the subsequent maintenance costs for the disk storage system manufacturer, can be greater than those traditionally experienced in conventional systems, where the maintenance fees charged to the user are typically a function of the total data storage capacity that is used.
U.S. Pat. No. 5,828,583 discusses ATA disk drives and the monitoring of certain attributes during operation in order to attempt to predict imminent failure of a disk drive.
U.S. Pat. No. 5,371,882 discusses a technique for predicting when a pool of shared spare disk drives, used in a large form factor disk drive memory having redundancy groups, will be exhausted by recording disk drive failure data and extrapolating past failure events to a spare disk drive exhaustion target date.
U.S. Pat. No. 6,411,943 B1 discusses in col. 57, line 46, to col. 58, line 10, an on-line service for billing a customer based on an amount of time and/or an amount of virtual disk storage that is read or written on behalf of the customer.