It is well known to back up critical data to nonvolatile disk storage for recovery purposes, and a variety of techniques exist to create backup or secondary copies of data. One known technique, data "mirroring," involves physically copying or mirroring a set of disk volumes from a primary disk storage subsystem, which is used during normal operations, to secondary and/or tertiary disk storage subsystem(s). Typically, the primary storage subsystem resides at a primary (geographic) site, the secondary and/or tertiary storage subsystems reside at secondary and/or tertiary (geographic) sites, and the data mirroring occurs via a network between the sites. To implement data mirroring, a data mirroring management program at the primary site maps pairs of corresponding disk volumes in the primary and secondary storage subsystems (or triplets of corresponding disk volumes in the primary, secondary and tertiary storage subsystems, if all are provided). The mapping indicates where data created and stored in the primary storage subsystem is mirrored in the secondary storage subsystem (and tertiary storage subsystem, if provided). A disk "volume" is a named logical disk drive that holds computer data and can comprise one or more physical disk drives. After such a mapping, data updates made at the primary storage subsystem are automatically copied to the secondary storage subsystem (and tertiary subsystem, if provided).
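The pair/triplet mapping described above can be sketched as a simple table of rows, each row pairing a primary volume with its mirror target(s). This is an illustrative sketch only; the names (VolumeMapping, target_volumes, the volume serials) are hypothetical and do not correspond to any actual product's data structures.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass(frozen=True)
class VolumeMapping:
    """One row of a mirroring table: a primary volume and the
    secondary (and optional tertiary) volume it is copied to."""
    primary: str                     # primary-site volume identifier
    secondary: str                   # corresponding secondary-site volume
    tertiary: Optional[str] = None   # present only in a three-site setup

# A mirroring table mapping pairs (or triplets) of volumes.
mirror_table: List[VolumeMapping] = [
    VolumeMapping("PRI001", "SEC001"),
    VolumeMapping("PRI002", "SEC002", "TER002"),
]

def target_volumes(primary: str,
                   table: List[VolumeMapping]) -> Tuple[str, Optional[str]]:
    """Look up where updates to a primary volume must be copied."""
    for m in table:
        if m.primary == primary:
            return (m.secondary, m.tertiary)
    raise KeyError(primary)
```

With such a table, the mirroring program can route every update made to a primary volume to its mapped secondary (and tertiary) volume.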
To support a data mirroring process, i.e. to establish a mapping between the primary and secondary storage subsystems and enable physical mirroring of data, the data mirroring management program needs the following information for both the primary and secondary disk storage subsystems (and tertiary disk storage subsystem, if provided): storage subsystem serial numbers, network identifications, network connection information, and other internal identifiers such as subsystem IDs, internal addresses and Logical Control Unit IDs. Other configuration information for the primary and secondary storage subsystems may also be needed: Worldwide Nodenames, Pooling Assignment numbers and Channel Connection Addresses. Presently, an administrator manually collects the foregoing information from paper documentation and by querying the disk subsystems. Based on this information, the administrator creates commands at the primary system to define which disk volumes in the primary subsystem are mirrored to which disk volumes in the secondary storage subsystem (and tertiary storage subsystem, if provided), record the mirror information in a mirroring table and initiate the mirroring. The commands include the following:

Peer-to-Peer Remote Copy (PPRC) commands:
CESTPATH: Establish a PPRC physical link connection between a primary side Logical Control Unit (LCU) and a remote side LCU.
CESTPAIR: Establish a PPRC mirror pair between a primary side volume and a remote side volume.

Global Mirror commands:
RSESSION-START: Start command to establish the mirror session in a Global Mirror.
RSESSION-PAUSE: Pause the mirror session so it does not attempt a synch point.
RSESSION-RESUME: Resume the mirror to take a synch point.
RVOLUME: Define a volume to the Global Mirror that has already been established with a PPRC CESTPAIR command.
RSESSION-DEFINE: Add a Logical Control Unit (LCU) to the Global Mirror configuration.

FlashCopy commands:
FCESTABL: Establish a FlashCopy relationship between the secondary PPRC volume at the remote side and a tertiary volume at the remote side.

Also, disk volumes occasionally need to be added to or removed from the primary and/or backup system. In such a case, the administrator needs to define and execute new commands to reflect the new or removed disk volumes.
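The order in which the commands above are issued can be sketched as follows: paths first, then PPRC pairs, then the Global Mirror session, then FlashCopy. The Python wrapper below is purely illustrative; the function name, argument structures and the textual form of each generated command line are assumptions, not the actual command syntax or any real API.

```python
from typing import List, Tuple

def mirror_setup_commands(lcu_pairs: List[Tuple[str, str]],
                          volume_pairs: List[Tuple[str, str]],
                          flash_targets: List[Tuple[str, str]]) -> List[str]:
    """Sketch of the sequence an administrator follows to bring up
    a mirror with the commands described above."""
    cmds = []
    # 1. Physical links between primary-side and remote-side LCUs.
    for pri_lcu, sec_lcu in lcu_pairs:
        cmds.append(f"CESTPATH {pri_lcu} {sec_lcu}")
    # 2. PPRC mirror pairs between primary and remote volumes.
    for pri_vol, sec_vol in volume_pairs:
        cmds.append(f"CESTPAIR {pri_vol} {sec_vol}")
    # 3. Global Mirror session: add each LCU to the configuration,
    #    start the session, then define the already-paired volumes.
    for pri_lcu, _ in lcu_pairs:
        cmds.append(f"RSESSION-DEFINE {pri_lcu}")
    cmds.append("RSESSION-START")
    for pri_vol, _ in volume_pairs:
        cmds.append(f"RVOLUME {pri_vol}")
    # 4. FlashCopy from each PPRC secondary to a tertiary volume.
    for sec_vol, ter_vol in flash_targets:
        cmds.append(f"FCESTABL {sec_vol} {ter_vol}")
    return cmds
```

Whenever a volume is added to or removed from the configuration, the administrator must regenerate and issue the corresponding subset of these commands by hand, which is the burden the present invention addresses.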
An existing IBM Global Mirror Executive (“GME”) program implements data mirroring in IBM servers running an IBM z/OS operating system. The IBM GME program provides the following features:
Initiates and monitors the entire Global Mirror session.
Detects and adds new production volumes to the mirror with no manual intervention.
Operates in either “Modeling” mode or “Execution” mode.
Detects problems with the mirror and fixes them (if possible).
Reports any problems via alert messages.
Detects and corrects (if possible) configuration problems.
Generates information and commands required for recovery.
Provides management reporting on data synchronization.
The IBM Global Mirror Executive program operates as follows. The GME program runs in an IBM z/OS MVS system as a 'started task'. At startup, the GME program reads in configuration files, a parameter options file, and a dynamically built 'current disk volume' file. The GME program builds internal tables and checks whether all volumes that are supposed to be part of the mirror are actually included in the mirror as it is currently running. The GME program will then take the following actions as required to ensure the mirror is functional: check the status of each mirrored volume pair, check the status of the secondary/tertiary volume pairs, and check the status of the mirror structure. Thus, an administrator must manually define the pairs (or triplets) of disk volumes in a primary storage subsystem and a secondary storage subsystem (and a tertiary storage subsystem, if provided). The following are existing commands and parameters input by an administrator to define the initial data mirroring and subsequently change the data mirroring to reflect changes to existing disk volumes in either the primary or backup system: CESTPATH, CESTPAIR, RSESSION START, RSESSION DEFINE and RVOLUME. The IBM GME program also monitors the data mirroring and responds to events such as new volumes, removed volumes, offline paths, invalid states of volume pairs, and configuration mismatches by taking the action appropriate for each event. This could be adding the new volume to the mirror, removing a volume address from the mirror (in case the volume no longer exists), bringing paths online, etc. IBM GME also generates a recovery solution, i.e. it generates the required sets of commands and data to recover at the secondary location in the event of an outage.
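The startup check described above, comparing the volumes that should be mirrored against those actually in the running mirror, amounts to a set reconciliation. The sketch below is an illustrative simplification under that reading; the function name, input format and ("add"/"remove") action labels are assumptions, not GME's actual interfaces.

```python
from typing import Iterable, List, Tuple

def reconcile(expected_volumes: Iterable[str],
              mirrored_volumes: Iterable[str]) -> List[Tuple[str, str]]:
    """Compare the volumes named in the configuration files against
    the volumes currently in the mirror, and report the actions
    needed to bring the mirror in line with the configuration."""
    expected = set(expected_volumes)
    mirrored = set(mirrored_volumes)
    actions = []
    for vol in sorted(expected - mirrored):
        actions.append(("add", vol))      # new production volume to mirror
    for vol in sorted(mirrored - expected):
        actions.append(("remove", vol))   # volume address no longer exists
    return actions
```

Each resulting action would then be carried out by issuing the appropriate mirroring commands (e.g. CESTPAIR and RVOLUME for an added volume).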
An existing IBM eRCMF solution functions mainly as an "open systems" mirroring management tool. It establishes a mirror based on command input from an administrator. Its main function is to process 'revert commands' at the remote disaster recovery (DR) site in the event of a primary site failure.
An object of the present invention is to facilitate the configuration of a data mirror.