A traditional storage system or storage array (herein also referred to as a “disk storage array”, “disk array”, or simply “array”) is a collection of hard disk drives operating together logically as a unified storage device. Storage arrays are designed to store large quantities of data. Storage arrays typically include one or more storage array processors (SPs) for handling both allocation requests and input/output (I/O) requests. An SP is the controller for and primary interface to the storage array.
The performance of storage arrays may be characterized by the array's total capacity, response time, and throughput. The capacity of a storage array is the maximum total amount of data that can be stored on the array. The response time of an array is the amount of time that it takes to read data from or write data to the array. The throughput of an array is a measure of the amount of data that can be transferred into or out of (i.e., written to or read from) the array over a given period of time.
It will be known to those skilled in the art that storage arrays may use a cache in order to improve the performance of the data storage system. The cache, which may be implemented using a fast, volatile memory such as RAM (random access memory), particularly dynamic RAM (DRAM), can store data so as to enable better performance. For example, the data storage array may temporarily cache data received from a host and destage the cached data onto the physical disk drives at a later time. This technique is known as write-back caching. However, a problem with DRAM cache is that its storage capacity is low and its cost is high.
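The write-back caching behavior described above can be illustrated with a minimal sketch. The class and parameter names below (e.g. `WriteBackCache`, `destage_threshold`) are illustrative assumptions for this example only, not any vendor's actual API: host writes are acknowledged as soon as the data lands in the (fast, volatile) cache, and the dirty data is destaged to the backing drives at a later time.

```python
# Minimal sketch of write-back caching (illustrative names, not a real API).

class WriteBackCache:
    def __init__(self, disk, destage_threshold=4):
        self.disk = disk                    # backing store: block address -> data
        self.dirty = {}                     # cached writes awaiting destage
        self.destage_threshold = destage_threshold

    def write(self, addr, data):
        # The host write completes once the data is in cache;
        # the physical disk write is deferred.
        self.dirty[addr] = data
        if len(self.dirty) >= self.destage_threshold:
            self.destage()

    def read(self, addr):
        # Serve from cache when possible, else fall through to disk.
        return self.dirty.get(addr, self.disk.get(addr))

    def destage(self):
        # At a later time, flush all dirty blocks to the physical drives.
        self.disk.update(self.dirty)
        self.dirty.clear()
```

In a real array the destage policy would also consider cache pressure, idle time, and write coalescing; here a simple dirty-block threshold stands in for that policy.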
A further problem that may limit the performance of a storage array is the performance of each individual storage component. For example, the read access time of a disk storage array is constrained by the access time of the disk drive from which the data is being read. Read access time may be affected by physical characteristics of the disk drive, such as the number of revolutions per minute of the spindle: the faster the spin, the less time it takes for the sector being read to come around to the read/write head. The placement of the data on the platter also affects access time, because it takes time for the arm to move to, detect, and properly orient itself over the proper track (or cylinder, for multihead/multiplatter drives). Reducing the read/write arm swing reduces the access time. Finally, the type of drive interface may have a significant impact on the overall performance of the disk array. For example, a multihead drive that supports reads or writes on all heads in parallel will have a much greater throughput than a multihead drive that allows only one head at a time to read or write data.
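The effect of spindle speed on access time can be quantified with a standard piece of arithmetic: on average, the desired sector is half a revolution away from the read/write head, so the average rotational latency is half the revolution period, i.e. (60 / RPM) / 2 seconds. The RPM figures below are common commodity drive speeds chosen for illustration; they are not taken from the text above.

```python
# Average rotational latency: the sector being read is, on average,
# half a revolution away from the head.
def avg_rotational_latency_ms(rpm):
    seconds_per_revolution = 60.0 / rpm
    return (seconds_per_revolution / 2.0) * 1000.0  # half a revolution, in ms

for rpm in (5400, 7200, 15000):
    print(rpm, "RPM ->", round(avg_rotational_latency_ms(rpm), 2), "ms")
# e.g. a 15000 RPM spindle gives a 2.0 ms average rotational latency
```

This is only one component of total access time; seek time (the arm swing discussed above) and transfer time add to it.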
It will be known by those skilled in the art that, in order to deal with at least some of these problems, there has been an increase in the use of semiconductor solid state drives (also known as solid state disks or SSDs), which may use flash memory as a storage device. Thus, in at least some cases there is a trend towards the use of SSDs as a storage device instead of a disk. Features that can make SSDs preferable as storage devices are, for example, a fast access rate, high throughput, a high integration density, and stability against an external impact. SSDs can move much larger amounts of data and process far more I/O requests, per time period, than conventional disks. This allows users to complete data transactions much more quickly.
In view of the above, it is common for data storage systems to have a combination of disks and SSDs for storing data. It is also common for high performance applications to use the SSDs, as the performance capabilities of the SSDs are superior to those of the disks.