The present invention relates to an adjustment for the performance of a logical volume within a storage apparatus, and more particularly to techniques for use with an inter-logical volume copy function for adjusting the performance of a destination logical volume in accordance with the performance of a source logical volume.
It should be first noted that the “adjustment for the performance of a volume” used herein refers to a modification to the configuration of a storage apparatus in which the volume is defined, such that the volume is provided with a performance higher than the required performance.
A technique called “RAID” (Redundant Array of Inexpensive Disks) is known for organizing two or more physical disks into a group to provide redundancy and improve the performance and reliability (see, for example, Jon William Toigo, “The Holy Grail of Data Storage Management,” Prentice Hall, 2000).
When using a storage apparatus which applies the RAID technique, two or more physical disks (physical storage media) within the storage apparatus are grouped to define a logical storage apparatus called a “parity group.”
Then, logical storage areas called “logical volumes” (hereinafter simply called the “volumes” in this disclosure) are defined in the parity group, such that a client computer uses one of the volumes for utilizing the storage apparatus. In many storage apparatuses, two or more volumes can be defined in a single parity group.
FIG. 1 illustrates an exemplary definition for volumes.
In FIG. 1, a physical disk 1000, a physical disk 1010, a physical disk 1020, and a physical disk 1030 make up a single parity group 1100 in which a volume 1110 and a volume 1120 are defined.
In this configuration, the volume 1110 and volume 1120 defined in the parity group 1100 share the physical disk 1000, physical disk 1010, physical disk 1020, and physical disk 1030.
Such a configuration, in which different volumes share the same physical disks, arises not only when the storage apparatus employs the RAID technique but also when two or more volumes are defined within a single physical disk.
In some storage apparatuses which apply the RAID technique, storage areas called “logical disks,” rather than volumes, are first defined in the parity group, so that a combination of logical disks, or a fragment of a logical disk area, can then be defined as a volume. In several other storage apparatuses, physical disks are partially or entirely combined to directly define a volume, without defining a parity group.
While there are several methods of forming volumes from physical disks as described above, they all have in common that different volumes can share the same physical disks (see, for example, Mark Farley, “Building Storage Networks,” Network Professional's Library, Osborne).
In the following, an area comprising a combination of one or more of partial or entire physical disks is collectively called the “parity group,” and a logical storage area defined on the parity group is called the “volume.”
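The relationship just defined can be illustrated by a minimal sketch (with hypothetical class and variable names not found in the specification): a parity group aggregates physical disks, and any volumes defined on the same parity group share all of those disks.

```python
# Hypothetical model of the parity-group/volume relationship described above.
class ParityGroup:
    def __init__(self, disks):
        self.disks = list(disks)   # physical disks making up the group
        self.volumes = []          # logical volumes defined on the group

    def define_volume(self, name):
        vol = {"name": name, "group": self}
        self.volumes.append(vol)
        return vol

def shared_disks(vol_a, vol_b):
    """Volumes on the same parity group share every one of its physical disks."""
    if vol_a["group"] is vol_b["group"]:
        return vol_a["group"].disks
    return []

# Mirrors FIG. 1: four physical disks, one parity group, two volumes.
group = ParityGroup(["disk1000", "disk1010", "disk1020", "disk1030"])
v1110 = group.define_volume("volume1110")
v1120 = group.define_volume("volume1120")
print(len(shared_disks(v1110, v1120)))  # → 4
```

The sketch shows why a load on one volume can affect another: both map onto the same four physical disks.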
A certain storage apparatus has a function of copying data between volumes without intervention of a CPU in a computer which utilizes the storage apparatus.
Further, some of the aforementioned storage apparatuses have a function of writing the same data into the destination volume whenever data is written into the source volume of an inter-volume copy, even after all data has been copied between the volumes, until the association between the two volumes is dissolved. In this specification, these functions are collectively expressed as “performing an inter-volume copy.”
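The behavior described above can be sketched as follows (a minimal illustration with hypothetical class names, not the apparatus's actual implementation): after an initial full copy, every write to the source is propagated to the destination until the pair is dissolved.

```python
# Hypothetical sketch of "performing an inter-volume copy".
class Volume:
    def __init__(self):
        self.blocks = {}  # block address -> data

class CopyPair:
    def __init__(self, source, destination):
        self.source, self.destination = source, destination
        self.destination.blocks = dict(source.blocks)  # initial full copy
        self.active = True

    def write(self, address, data):
        self.source.blocks[address] = data
        if self.active:  # mirror the write while the volumes remain associated
            self.destination.blocks[address] = data

    def dissolve(self):
        self.active = False  # later writes are no longer mirrored

src, dst = Volume(), Volume()
pair = CopyPair(src, dst)
pair.write(0, "A")   # mirrored to the destination
pair.dissolve()
pair.write(1, "B")   # reaches the source only
print(dst.blocks)    # → {0: 'A'}
```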
Among functions of performing an inter-volume copy, a copy between volumes in the same storage apparatus is called a “snapshot” (see, for example, U.S. Pat. No. 5,845,295), while a copy between volumes in different storage apparatuses is called a “remote copy” (see, for example, U.S. Pat. No. 5,155,845).
In another system, part or the entirety of the data included in a volume within a storage apparatus is specified, and the specified data is cached in a cache memory within the storage apparatus (see, for example, JP-A-2001-175537).
In a further system, each volume in a storage apparatus is given a processing priority, such that requests from client computers are processed in accordance with the processing priorities (see, for example, U.S. Pat. No. 6,157,963).
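A priority scheme of this kind can be sketched with a standard priority queue (a hypothetical interface, not the patented system's actual mechanism): each volume carries a processing priority, and pending client requests are served in priority order.

```python
# Hypothetical sketch of per-volume priority scheduling.
import heapq
import itertools

class RequestScheduler:
    def __init__(self, volume_priorities):
        self.priorities = volume_priorities  # volume name -> priority (lower = more urgent)
        self.queue = []
        self.counter = itertools.count()     # tie-breaker preserves FIFO order per priority

    def submit(self, volume, request):
        prio = self.priorities[volume]
        heapq.heappush(self.queue, (prio, next(self.counter), volume, request))

    def next_request(self):
        _, _, volume, request = heapq.heappop(self.queue)
        return volume, request

sched = RequestScheduler({"vol_high": 0, "vol_low": 1})
sched.submit("vol_low", "read A")
sched.submit("vol_high", "read B")
print(sched.next_request()[1])  # → read B (higher-priority volume served first)
```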
In a further system, volumes within a storage apparatus are relocated to optimize the performance of each volume (see, for example, JP-A-2001-67187).
There is an approach for combining two or more computers to operate them as a single system to improve the performance and availability of the overall system. This approach is called “clustering,” and a system which employs the clustering is called a “clustering system” (see, for example, Richard Barker, Mark Erickson et al., The Resilient Enterprise—Recovering information services from disasters, VERITAS Vision 2002 distributed book, 2002, and Evans Marcus, Hal Stern, Blueprints for High Availability, Wiley Computer Publishing, 2002).
Cluster systems are generally classified into a load balance type and a failover type. The load balance type cluster system distributes service applications among all servers so that the servers process the service applications. The failover type cluster system in turn divides a group of servers into a server for processing service applications (active server), and a server (standby server) which is normally in standby and takes over (fails over) the processing if the active server fails in the operation.
The failover type cluster systems include a system which has an active server and a standby server that share volumes, and a system which has servers that mutually mirror volumes (have copies of the same data).
FIG. 2 illustrates an exemplary configuration of a failover type cluster system which mirrors volumes.
A computer 2000 and a computer 2100 are connected to a communication device 2400 and a communication device 2500, respectively. The communication device 2400 and communication device 2500 in turn are connected to a storage apparatus 2200 and a storage apparatus 2300, respectively. The storage apparatus 2200 and storage apparatus 2300 are interconnected through a communication path 2600. A volume 2210 is defined in the storage apparatus 2200, while a volume 2310 is defined in the storage apparatus 2300.
The computer 2000 processes service applications using the volume 2210, and utilizes a remote copy function to copy the volume 2210 to the volume 2310 through the communication path 2600. In this state, data written into the volume 2210 is automatically written into the volume 2310.
In the system illustrated in FIG. 2, when the computer 2000 fails, the computer 2100 takes over (fails over) the processing of the service applications. In this event, the computer 2100 uses the volume 2210 or volume 2310.
On the other hand, when the storage apparatus 2200 fails, either the computer 2000 or the computer 2100 processes the service applications using the volume 2310.
Generally, however, when the computer 2000 and the storage apparatus 2200 are installed in a site geographically remote from the site in which the computer 2100 and the storage apparatus 2300 are installed, the computer 2100 processes the service applications using the storage apparatus 2300 if either the computer 2000 or the storage apparatus 2200 fails.
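The failover decisions described for the configuration of FIG. 2 can be summarized in a small sketch (hypothetical function and names; the actual takeover logic of a cluster product is more involved):

```python
# Hypothetical sketch of the failover choices for the FIG. 2 configuration.
def select_configuration(computer_2000_ok, storage_2200_ok, remote_sites=True):
    """Return which computer and volume continue the service applications."""
    if computer_2000_ok and storage_2200_ok:
        return ("computer2000", "volume2210")   # normal operation
    if remote_sites:
        # Geographically separated sites: any fault at the primary site moves
        # processing to computer 2100 and storage apparatus 2300.
        return ("computer2100", "volume2310")
    if not computer_2000_ok:
        return ("computer2100", "volume2310")   # computer 2000 failed over
    return ("computer2000", "volume2310")       # storage apparatus 2200 failed

print(select_configuration(False, True))  # → ('computer2100', 'volume2310')
```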
Since the cluster system illustrated in FIG. 2 fully duplicates the computers, communication devices, communication paths, and volumes, the overall system can continue operating even if any one of the devices within the system fails.
FIG. 3 illustrates an example in which another volume is defined on the same parity group as the volume 2310 in the cluster system illustrated in FIG. 2.
A computer 2000 and a computer 2100 are connected to a communication device 2400 and a communication device 2500, respectively. The communication device 2400 and communication device 2500 in turn are connected to a storage apparatus 2200 and a storage apparatus 2300, respectively. A computer 3000 is connected to a communication device 3600 which in turn is connected to a storage apparatus 2300. The storage apparatus 2200 and storage apparatus 2300 are interconnected through a communication path 2600. A parity group 3400 is defined in the storage apparatus 2200, while a parity group 3300 is defined in the storage apparatus 2300. Further, a volume 2210 is defined in the parity group 3400, while a volume 2310 and a volume 3310 are defined in the parity group 3300.
Here, the computer 2000 processes service applications using the volume 2210 which is copied to the volume 2310 using a remote copy function. The computer 2100 is in standby for processing the service applications using the volume 2310 in the event the computer 2000 fails. The computer 3000 in turn processes service applications using the volume 3310.
In the cluster system illustrated in FIG. 3, when a fault in the computer 2000 causes a failover so that the computer 2100 takes over the processing using the volume 2310, the performance of the overall system is degraded if the volume 2310 has a lower performance than the volume 2210. This may occur when a load is placed on the volume 3310, which shares the physical disks with the volume 2310.
To avoid the problem mentioned above, the volume 2310 may be located on a parity group different from that of the volume 3310, on which a load is placed, when the cluster system of FIG. 3 is built. However, this does not always prevent the degradation in the performance of the overall system after the failover. The load on each volume may increase or decrease as the trend of accesses to the storage apparatuses changes in the service applications executed by the computer 3000 and in those executed by the computer 2000, and the configuration may be modified to optimize the performance of each volume by the relocation of volumes described above in connection with the prior art.
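The degradation mechanism can be made concrete with a minimal sketch (hypothetical function and assumed capacity figures): volumes on the same parity group compete for the same physical disks, so the throughput available to the volume 2310 shrinks as the load on the volume 3310 grows.

```python
# Hypothetical model of contention on a shared parity group.
def available_throughput(group_capacity_iops, other_volume_load_iops):
    """Throughput left for a volume after co-located volumes consume their share."""
    return max(group_capacity_iops - other_volume_load_iops, 0)

# Assume parity group 3300 can serve 1000 IOPS in total (illustrative figure).
# With volume 3310 idle, volume 2310 can use the full capacity after a failover...
print(available_throughput(1000, 0))    # → 1000
# ...but a heavy load on volume 3310 leaves volume 2310 degraded.
print(available_throughput(1000, 800))  # → 200
```

Under this simple model, the failover destination meets the source volume's performance only while the co-located load stays low, which is precisely the condition that the changing access trends described above cannot guarantee.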