Cluster volume manager (CVM) software enables nodes of a cluster to simultaneously access and manage a set of disk drives under volume manager control (i.e., a shared pool of storage). The CVM typically presents the same logical view of the disk drives to all of the cluster nodes. In a shared disk CVM having a distributed configuration, each node must coordinate its own access to local disks and remote disks. Accordingly, if a node fails, the other nodes can still access the disks.
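The idea that every node sees the same logical view of the shared pool, while coordinating its own access to local and remote disks, can be sketched as follows. This is a minimal illustration only; the class and names are hypothetical and do not correspond to any actual CVM product's API.

```python
# Hypothetical sketch: each cluster node reaches the same set of disk
# arrays (some locally, some remotely) and derives an identical logical
# view of the shared storage pool.
class ClusterNode:
    def __init__(self, name, local_disks, remote_disks):
        self.name = name
        # The node coordinates its own access to both disk sets.
        self.reachable = list(local_disks) + list(remote_disks)

    def logical_view(self):
        # Every node derives the same (order-independent) view of the pool.
        return sorted(self.reachable)

node_a = ClusterNode("node_24A", ["disk_array_28A"], ["disk_array_28B"])
node_b = ClusterNode("node_24B", ["disk_array_28B"], ["disk_array_28A"])

# Both nodes present the same logical view of the shared pool.
assert node_a.logical_view() == node_b.logical_view()
# If node_a fails, node_b still reaches both arrays over its own paths.
```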
FIG. 1 shows a conventional distributed CVM environment 20 having a cluster 22 consisting of two nodes 24(A), 24(B) (collectively, nodes 24), and array-based disk storage 26 consisting of two disk arrays 28(A), 28(B) (collectively, disk arrays 28). A short communications medium 30(A) (e.g., a SCSI cable, a Fibre Channel fabric, or an iSCSI LAN fabric) provides connectivity between the node 24(A) and the disk array 28(A) in a localized manner. Similarly, a short communications medium 30(B) provides connectivity between the node 24(B) and the disk array 28(B). Additionally, a computer network 32 (e.g., a Metropolitan Area Network) connects the node 24(A) to the disk array 28(B), and further connects the node 24(B) to the disk array 28(A).
As further shown in FIG. 1, each disk array 28 includes a host adapter 34 and an array of disk drives 36. In particular, the disk array 28(A) includes a host adapter 34(A) and an array of disk drives 36(A). Similarly, the disk array 28(B) includes a host adapter 34(B) and an array of disk drives 36(B). The CVM software ensures identical content between the disk drives 36(A) of the disk array 28(A) and the disk drives 36(B) of the disk array 28(B) to form logical disk content appearing as disk mirror pairs 38 which are accessible by the cluster 22 of nodes 24.
During CVM setup, a CVM technician configures the node 24(A) to locally access the disk array 28(A) through the medium 30(A), and to remotely access the disk array 28(B) through a link of the computer network 32. Furthermore, the CVM technician configures the node 24(B) to locally access the disk array 28(B) through the medium 30(B), and to remotely access the disk array 28(A) through another link of the computer network 32. A dashed line 40 delineates the array-based disk storage 26 from the cluster 22 and the computer network 32. The CVM technician must properly configure the links through the computer network 32 prior to CVM operation, i.e., proper administration of the CVM requires proper establishment of these links.
Once CVM setup is complete, the nodes 24 are capable of performing mirrored writes to the array-based disk storage 26. For example, for the node 24(A) to write data to the array-based disk storage 26, the node 24(A) performs (i) a local write operation 42(L) on the disk array 28(A) through the medium 30(A), and (ii) a remote write operation 42(R) on the disk array 28(B) through the computer network 32 (see the dashed arrows in FIG. 1). For the node 24(B) to write data to the array-based disk storage 26, the node 24(B) performs (i) a local write operation on the disk array 28(B) through the medium 30(B), and (ii) a remote write operation on the disk array 28(A) through the computer network 32 in a similar manner. These contemporaneous local and remote writes by the nodes 24 enable data mirroring across an extended distance (e.g., across a campus or city), thus protecting the data against a single site failure (e.g., a building fire).
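The mirrored-write sequence described above can be sketched as follows. This is an illustrative sketch only, with the disk arrays modeled as plain dictionaries; it omits the locking, ordering, and failure handling a real CVM would require.

```python
# Hypothetical sketch of a mirrored write: the node issues one local
# write and one remote write so that both arrays hold identical content.
def mirrored_write(local_array, remote_array, block, data):
    local_array[block] = data    # local write (e.g., 42(L) over medium 30)
    remote_array[block] = data   # remote write (e.g., 42(R) over network 32)

# Model the two disk arrays of FIG. 1 as block-to-data mappings.
disk_array_28A = {}
disk_array_28B = {}

# Node 24(A) writes: locally to 28(A), remotely to 28(B).
mirrored_write(disk_array_28A, disk_array_28B, block=7, data=b"payload")

# Both halves of the mirror pair now hold the same content, so the
# data survives the loss of either site.
assert disk_array_28A[7] == disk_array_28B[7] == b"payload"
```

Node 24(B) would invoke the same routine with the roles of the two arrays swapped, since its local path leads to 28(B) and its remote path to 28(A).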
An example of CVM software which runs on a cluster of nodes and which performs data storage operations similar to those described above is offered by Veritas Software, which has merged with Symantec Corporation of Cupertino, Calif. Another example of such CVM software is offered by Oracle Corporation of Redwood Shores, Calif.