An established technology that provides large disk capacity with very high reliability is a redundant array of independent disk drives, or RAID, also known as disk drive array technology. RAID uses multiple physically separate disk drives which act as a single drive and are all accessed through a single array controller. For data reliability, a parity block is derived from related data blocks of the various disk drives, permitting the rebuilding of the data of a failed disk drive by processing the data of the other drives together with the parity block. Data may be stored as a sector, a segment, a stripe, or a volume across the disk drives of a RAID system. This enables parallel reads and writes by the disk drives and thereby increases data transfer speed.
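The parity mechanism described above can be illustrated with a minimal sketch. The block contents and the three-data-drive layout below are hypothetical and chosen only for illustration; the XOR relationship itself is the standard RAID parity technique.

```python
def xor_blocks(*blocks: bytes) -> bytes:
    """Byte-wise XOR of equal-length data blocks."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

# Hypothetical data blocks striped across three drives.
d0 = b"\x01\x02\x03\x04"
d1 = b"\x10\x20\x30\x40"
d2 = b"\xaa\xbb\xcc\xdd"

# The parity block, stored on a further drive, is the XOR of the data blocks.
parity = xor_blocks(d0, d1, d2)

# If the drive holding d1 fails, its block is rebuilt by XORing the
# surviving data blocks with the parity block.
rebuilt_d1 = xor_blocks(d0, d2, parity)
assert rebuilt_d1 == d1
```

Because XOR is its own inverse, the same operation that generates the parity block also regenerates any single missing data block from the remaining blocks.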
In the disk array prior art, cache memory resides within a single contiguous memory. As I/Os are scheduled into this memory, hardware state machines perform exclusive-OR (XOR) operations on the data to generate a parity block and deposit the result in a pre-specified area of the contiguous memory. This approach creates fundamental problems of absolute memory system size, owing to design restrictions on the individual memory system as well as the limits of DRAM technology. It also creates bandwidth issues, since overall bandwidth is limited by the single memory system. If multiple memory complexes are used, cache block allocation across the memory systems causes XOR operations to cross the memory complexes, creating inefficient transfers that decrease performance. In addition, bandwidth is further restricted because I/O operations can become concentrated in one or two areas of the memory.
A memory system may include two or more memory complexes. A memory complex in a disk array controller contains cache memory and associated hardware for direct memory access (DMA) and exclusive OR operations (XOR). The cache memory is sometimes referred to as a cache pool. If multiple cache pools are used, the XOR operation could be performed on the data no matter where it existed; however, this makes the hardware that performs the XOR very complex and expensive.
FIG. 1 illustrates a prior art arrangement in which multiple cache pools divide up a single data set.
Therefore, it would be desirable to provide a method for maintaining related data within a single memory complex so as to avoid thrashing and enhance the speed of operation.