In conventional RAID storage systems, redundancy groups are established for any RAID type (e.g., RAID 0, RAID 5, "just a bunch of disks" (JBOD), etc.). Redundancy groups typically contain from one to sixteen drives and are chosen from the set of all available drives. The drives associated with a particular redundancy group need not be contiguous or in any special order. The stripe size for a particular redundancy group is configurable, and an optional per-drive logical block address (LBA) offset may be available. Redundancy groups may be linked to support mirroring to any other drive in the system, also known as N-way mirroring. Fixed cluster sizes enable mapping templates for the entire drive LBA space. In this manner, any physical drive LBA may be addressed by selecting the correct redundancy group and cluster to access. Furthermore, a single drive may belong to more than one redundancy group. Firmware resolves potential mapping conflicts by allocating clusters in one redundancy group that do not conflict with previously allocated clusters from another redundancy group.
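The structures above can be illustrated with a brief sketch. This is not firmware from any actual product; the class name, the cluster size, and the conflict-avoidance loop are illustrative assumptions chosen only to show how a fixed cluster size lets a (redundancy group, cluster number) pair resolve to a drive LBA, and how overlapping groups avoid reallocating one another's clusters.

```python
# Illustrative sketch only; all names and constants are hypothetical.
CLUSTER_SIZE = 1024  # blocks per cluster; fixed system-wide (assumed value)

class RedundancyGroup:
    def __init__(self, name, drives, stripe_size, lba_offset=0):
        self.name = name
        self.drives = drives          # any subset of drives; need not be contiguous
        self.stripe_size = stripe_size
        self.lba_offset = lba_offset  # optional per-drive LBA offset
        self.allocated = set()        # cluster numbers already handed out

    def allocate(self, reserved_elsewhere):
        """Pick the lowest cluster number that conflicts neither with this
        group's own allocations nor with clusters already claimed by an
        overlapping redundancy group (resolving shared-drive conflicts)."""
        c = 0
        while c in self.allocated or c in reserved_elsewhere:
            c += 1
        self.allocated.add(c)
        return c

def cluster_to_lba(group, cluster):
    # Fixed cluster size makes the mapping a simple template: the cluster
    # number scales directly into the drive LBA space.
    return group.lba_offset + cluster * CLUSTER_SIZE
```

With a fixed cluster size, no per-cluster geometry needs to be stored; the template arithmetic alone recovers the physical location.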
FIG. 1 illustrates a flat system mapping table 100, including a system mapping table 110 which holds cluster descriptors. Each cluster descriptor includes a pointer 150a, 150b, . . . , 150c to a particular redundancy group and a cluster number 160a, 160b, . . . , 160c corresponding to the cluster allocated to that redundancy group. Each volume is defined by a set of sequential cluster descriptors. The volume LBA contains a volume map pointer and a cluster offset. The volume map pointer, contained in the upper bits of the volume LBA, gives an offset from the base of the flat volume map to the correct volume cluster descriptor. The volume cluster descriptor contains the redundancy group pointer and the corresponding cluster number. The redundancy group pointer points to the correct redundancy group descriptor in the redundancy group descriptor table. Thus, for a flat map, the redundancy group descriptor, the cluster number, and cluster offset are all fed into a mapping engine to arrive at the physical drive address.
In a flat volume map, each volume map entry must reside within a contiguous block of memory; it is therefore difficult to expand a particular volume map. Expanding a volume map requires moving all subsequent volume maps farther down the table to accommodate the new, larger map. Defragmentation is then required to realign the memory table. Large volume maps may require pausing volume activity during the move, which creates system latency. Additionally, volume map table manipulation may require large metadata update operations, which are processor intensive and adversely affect system performance.
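A toy model makes the expansion cost concrete. The table layout and bookkeeping below are hypothetical, but they exhibit the behavior described above: because every volume map must occupy a contiguous block of the single flat table, growing an early volume forces every subsequent volume's descriptors to slide down, and the count of moved descriptors grows with the size of the maps that follow.

```python
# Toy model of the contiguous flat-table expansion problem (hypothetical).
table = []    # one flat memory table holding all volume maps back to back
extents = {}  # volume -> (start index, descriptor count) within the table

def create_volume(vol, n_clusters):
    extents[vol] = (len(table), n_clusters)
    table.extend((vol, c) for c in range(n_clusters))

def expand_volume(vol, extra):
    """Grow one volume's map in place; returns how many descriptors
    belonging to OTHER volumes had to be moved to make room."""
    start, length = extents[vol]
    # Insert the new descriptors; everything after this volume slides down.
    table[start + length:start + length] = [(vol, length + c) for c in range(extra)]
    extents[vol] = (start, length + extra)
    moved = 0
    for v, (s, l) in list(extents.items()):
        if s > start:
            extents[v] = (s + extra, l)
            moved += l  # these descriptors were relocated
    return moved
```

In this model the move cost is proportional to the total size of all later volume maps, which is why large tables may require pausing volume activity and large metadata updates during the shift.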
U.S. Pat. No. 5,546,558, entitled, “Memory System with Hierarchic Disk Array and Memory Map Store for Persistent Storage of Virtual Mapping Information,” hereinafter the '558 patent, describes a data memory system that has a hierarchical disk array of multiple disks, a disk array controller for coordinating data transfer to and from the disks, and a RAID management system for mapping two different RAID areas onto the disks. The RAID management system stores data in one of the RAID areas according to mirror redundancy, and stores data in the other RAID area according to parity redundancy. The RAID management system then shifts or migrates data between the mirror and parity RAID areas on the disks in accordance with a predefined performance protocol, such as data access recency or access frequency. The data memory system also includes a memory map store embodied as a non-volatile RAM. The memory map store provides persistent storage of the virtual mapping information used by the RAID management system to map the first and second RAID areas onto the disks within the disk array. The RAID management system updates the memory map store with new mapping information each time data is migrated between mirror and parity RAID areas.
The method described in the '558 patent uses the conventional flat volume mapping approach and therefore does not offer a solution to the latency problems caused by manipulating the memory map store each time data migrates in the system. The '558 patent does not address defragmentation or system memory resource issues. Finally, the method described in the '558 patent does not offer a mechanism for reducing the amount of data required in the memory map stores.
Therefore, it is an object of the present invention to provide a method of mapping volume tables that allows volume maps to be expanded without impacting system performance.
It is another object of this invention to provide a method of expanding volume maps without requiring defragmentation.
It is yet another object of this invention to provide a method of mapping volume tables such that large volume maps may be reduced in size in order to improve system performance.
It is yet another object of this invention to provide a method of mapping volume tables such that minimal metadata updates are required and system performance is not adversely impacted.