As the value and use of information continues to increase, individuals and businesses seek additional ways to process and store information. One option available to users is information handling systems. An information handling system generally processes, compiles, stores, and/or communicates information or data for business, personal, or other purposes, thereby allowing users to take advantage of the value of the information. Because technology and information handling needs and requirements vary between different users or applications, information handling systems may also vary regarding what information is handled, how the information is handled, how much information is processed, stored, or communicated, and how quickly and efficiently the information may be processed, stored, or communicated. The variations in information handling systems allow for information handling systems to be general or configured for a specific user or specific use such as financial transaction processing, airline reservations, enterprise data storage, or global communications. In addition, information handling systems may include a variety of hardware and software components that may be configured to process, store, and communicate information and may include one or more computer systems, data storage systems, and networking systems.
Information handling systems often use one or more arrays of physical storage resources for storing information. Arrays of physical storage resources typically utilize multiple disks to perform input and output operations and can be structured to provide redundancy, which may increase fault tolerance (e.g., a Redundant Array of Independent Disks or “RAID”). Other advantages of arrays of storage resources may be increased data integrity, throughput, and/or capacity. In operation, one or more physical storage resources disposed in an array of storage resources may appear to an operating system as a single logical storage unit or “virtual storage resource.” Implementations of storage resource arrays can range from a few storage resources disposed in a server chassis to hundreds of storage resources disposed in one or more separate storage enclosures. In certain cases, one or more arrays of storage resources may be implemented as a storage area network (SAN). A SAN is in effect an array or collection of physical storage resources communicatively coupled to and accessible via a network (e.g., a host information handling system may access the SAN via a network connection).
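The notion of a virtual storage resource described above can be illustrated with a minimal sketch: several physical disks presented to the operating system as a single logical unit, with logical blocks striped round-robin across the members (a RAID-0-like layout, chosen here purely for illustration; the class and method names are hypothetical, not from any actual product).

```python
class PhysicalDisk:
    """One physical storage resource, modeled as a flat array of blocks."""
    def __init__(self, capacity_blocks):
        self.blocks = [None] * capacity_blocks

class VirtualStorageResource:
    """Presents several physical disks as one logical storage unit by
    striping logical block addresses (LBAs) round-robin across them."""
    def __init__(self, disks):
        self.disks = disks

    def _locate(self, lba):
        # Map a logical block address to (member disk, block offset on disk).
        return self.disks[lba % len(self.disks)], lba // len(self.disks)

    def write(self, lba, data):
        disk, offset = self._locate(lba)
        disk.blocks[offset] = data

    def read(self, lba):
        disk, offset = self._locate(lba)
        return disk.blocks[offset]
```

A host (or operating system) addresses only the logical unit; the mapping to physical members is hidden, which is what allows an array to rearrange or migrate physical resources behind the same logical identity.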
From time to time, an administrator or user of an array of storage resources may desire to migrate data from one storage resource to another. For example, as a storage resource ages and becomes obsolete, it may be desired to copy all of the data from the storage resource to a newer storage resource. However, traditional approaches to data migration have numerous disadvantages. For example, FIG. 1 depicts a system 100 employing a traditional approach to data migration. In the approach depicted in FIG. 1, a migration module 104 on host 102 may manage migration of data from storage resource 110a of storage array 108a to storage array 108b. Under this approach, capacity is allocated to the destination storage array 108b (e.g., storage resource 110b is allocated to storage array 108b), and destination storage resource 110b is assigned an identifier (e.g., iSCSI qualified name or Fibre Channel World Wide Name) different than that of the source storage resource 110a. Migration module 104 then reads the data from source storage resource 110a and writes it to destination storage resource 110b such that migrated data follows path 116. During migration, a portion of the data being migrated may be the target of an input-output (I/O) operation (e.g., a read request or write request from host 102). Accordingly, under the approach of FIG. 1, data associated with write requests may be written to both source storage resource 110a and destination storage resource 110b, and the migration module 104 may track which blocks have been written, so as to avoid writing old data over new data during the migration. Data associated with read requests during migration may be read from source storage resource 110a. After all migrated data is copied to destination storage resource 110b, migration module 104 may reconfigure host 102 to map to the destination storage resource 110b, and source storage resource 110a may be deleted.
The approach of FIG. 1 has numerous disadvantages. For example, the approach of FIG. 1 is inefficient because migrated data moves over network 106 twice (first from source storage resource 110a to host 102, then from host 102 to destination storage resource 110b). In addition, the approach of FIG. 1 requires that data associated with write requests be written to both source storage resource 110a and destination storage resource 110b during migration. Furthermore, this approach comes with a high level of management complexity, as destination storage resource 110b is assigned a new identifier, requiring reconfiguration at the host level, network level, and the storage array level.
As another example, FIG. 2 depicts a system 200 employing a traditional approach to data migration. In the approach depicted in FIG. 2, a replication module 214 on storage array 208a may manage migration of data from storage resource 210a of storage array 208a to storage array 208b. Under this approach, capacity is allocated to the destination storage array 208b (e.g., storage resource 210b is allocated to storage array 208b), and destination storage resource 210b is assigned an identifier (e.g., iSCSI qualified name or Fibre Channel World Wide Name) different than that of the source storage resource 210a. Replication module 214 then reads the data from source storage resource 210a and writes it to destination storage resource 210b via network 206 such that migrated data follows path 216. Replication module 214 may use snapshot technology to take periodic point-in-time snapshots, allowing it to maintain a consistent copy of data for migration and to track writes to source storage resource 210a during migration of data to destination storage resource 210b. Accordingly, under the approach of FIG. 2, data associated with write requests may be tracked using the snapshot technology. Initially, data written by host 102 during migration may be written to source storage resource 210a if blocks associated with such data have not been migrated, and replication module 214 may also track such writes using snapshot technology. At a certain point (e.g., if the number of blocks written by host 102 becomes small), replication module 214 may block I/O commands from host 102, complete the data migration, and then reconfigure host 102 to access the new storage resource 210b.
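The array-side approach of FIG. 2 can be sketched with a dirty-block set standing in for snapshot-based write tracking (a deliberate simplification; the class and method names are hypothetical, not from any real array firmware): an initial point-in-time full copy, delta passes over blocks written in the meantime, and a final pass with host I/O blocked before cutover.

```python
class Replicator:
    """Array-side migration: full copy, delta passes, then blocked-I/O cutover."""
    def __init__(self, source, destination):
        self.source = source
        self.destination = destination
        self.dirty = set()    # blocks written since the last copy pass
        self.frozen = False   # True once host I/O is blocked for cutover

    def host_write(self, lba, data):
        if self.frozen:
            raise IOError("I/O blocked during final cutover")
        self.source[lba] = data
        self.dirty.add(lba)   # snapshot delta records the write

    def initial_copy(self):
        self.destination[:] = self.source[:]   # point-in-time full copy
        self.dirty.clear()

    def delta_copy(self):
        # Copy only blocks dirtied since the previous pass.
        for lba in list(self.dirty):
            self.destination[lba] = self.source[lba]
        self.dirty.clear()

    def cutover(self):
        # Once the dirty set is small: block host I/O, copy the last delta.
        self.frozen = True
        self.delta_copy()
```

Here the migrated data crosses the network only once (array to array), but every host write during migration still incurs tracking overhead, which is the cost discussed next.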
The approach of FIG. 2 also has numerous disadvantages. For example, the approach of FIG. 2 is inefficient because all write requests must be tracked with snapshots, which may be quite voluminous during times of heavy write activity. In addition, this approach comes with a high level of management complexity, as destination storage resource 210b is assigned a new identifier, requiring reconfiguration at the host level and the storage array level.