The present invention relates to mirroring of data contained in a dominant logical unit of a dominant mass-storage device on a remote-mirror logical unit provided by a remote mass-storage device. An embodiment of the present invention, discussed below, involves disk-array mass-storage devices. To facilitate that discussion, a general description of disk drives and disk arrays is first provided.
The most commonly used non-volatile mass-storage device in the computer industry is the magnetic disk drive. In the magnetic disk drive, data is stored in tiny magnetized regions within an iron-oxide coating on the surface of the disk platter. A modern disk drive comprises a number of platters horizontally stacked within an enclosure. The data within a disk drive is hierarchically organized within various logical units of data. The surface of a disk platter is logically divided into tiny, annular tracks nested one within another. FIG. 1A illustrates tracks on the surface of a disk platter. Note that, although only a few tracks are shown in FIG. 1A, such as track 101, an actual disk platter may contain many thousands of tracks. Each track is divided into radial sectors. FIG. 1B illustrates sectors within a single track on the surface of the disk platter. Again, a given disk track on an actual magnetic disk platter may contain many tens or hundreds of sectors. Each sector generally contains a fixed number of bytes. The number of bytes within a sector is generally operating-system dependent, and normally ranges from 512 bytes per sector to 4096 bytes per sector. Data is normally retrieved from, and stored to, a hard disk drive in units of sectors.
The modern disk drive generally contains a number of magnetic disk platters aligned in parallel along a spindle passed through the center of each platter. FIG. 2 illustrates a number of stacked disk platters aligned within a modern magnetic disk drive. In general, both surfaces of each platter are employed for data storage. The magnetic disk drive generally contains a comb-like array of mechanical READ/WRITE heads 201 that can be moved along a radial line from the outer edge of the disk platters toward the spindle of the disk platters. Each discrete position along the radial line defines a set of tracks on both surfaces of each disk platter. The set of tracks within which the ganged READ/WRITE heads are positioned at some point along the radial line is referred to as a cylinder. In FIG. 2, the tracks 202-210 beneath the READ/WRITE heads together comprise a cylinder, which is graphically represented in FIG. 2 by the dashed lines of a cylinder 212.
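The cylinder/head/sector geometry described above can be illustrated with a short sketch that maps a (cylinder, head, sector) triple to a linear sector number and byte offset. The geometry constants and function names below are illustrative assumptions for the sake of example, not parameters of any actual drive.

```python
# Sketch of the cylinder/head/sector ("CHS") geometry described above.
# All constants are illustrative assumptions.

HEADS_PER_CYLINDER = 8   # two surfaces per platter, four platters (assumed)
SECTORS_PER_TRACK = 63   # sectors in each annular track (assumed)
BYTES_PER_SECTOR = 512   # a common operating-system-dependent sector size

def chs_to_linear(cylinder: int, head: int, sector: int) -> int:
    """Map a (cylinder, head, sector) triple to a 0-based linear sector
    number; sectors are conventionally numbered starting from 1."""
    return (cylinder * HEADS_PER_CYLINDER + head) * SECTORS_PER_TRACK \
        + (sector - 1)

def linear_to_byte_offset(linear: int) -> int:
    """Byte offset of a linear sector number on the medium."""
    return linear * BYTES_PER_SECTOR

# The first sector of the drive:
assert chs_to_linear(0, 0, 1) == 0
# One full track later, the head index advances by one:
assert chs_to_linear(0, 1, 1) == SECTORS_PER_TRACK
```

The key point of the sketch is that a "cylinder" groups, at one radial head position, one track from every surface, so advancing the head index moves through tracks within a cylinder before the cylinder index advances.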
FIG. 3 is a block diagram of a standard disk drive. The disk drive 301 receives input/output (“I/O”) requests from remote computers via a communications medium 302 such as a computer bus, fibre channel, or other such electronic communications medium. For many types of storage devices, including the disk drive 301 illustrated in FIG. 3, the vast majority of I/O requests are either READ or WRITE requests. A READ request requests that the storage device return to the requesting remote computer some requested amount of electronic data stored within the storage device. A WRITE request requests that the storage device store electronic data furnished by the remote computer within the storage device. Thus, as a result of a READ operation carried out by the storage device, data is returned via communications medium 302 to a remote computer, and as a result of a WRITE operation, data is received from a remote computer by the storage device via communications medium 302 and stored within the storage device.
The disk drive storage device illustrated in FIG. 3 includes controller hardware and logic 303 including electronic memory, one or more processors or processing circuits, and controller firmware, and also includes a number of disk platters 304 coated with a magnetic medium for storing electronic data. The disk drive contains many other components not shown in FIG. 3, including READ/WRITE heads, a high-speed electronic motor, a drive shaft, and other electronic, mechanical, and electromechanical components. The memory within the disk drive includes a request/reply buffer 305, which stores I/O requests received from remote computers, and an I/O queue 306 that stores internal I/O commands corresponding to the I/O requests stored within the request/reply buffer 305. Communication between remote computers and the disk drive, translation of I/O requests into internal I/O commands, and management of the I/O queue, among other things, are carried out by the disk drive I/O controller as specified by disk drive I/O controller firmware 307. Translation of internal I/O commands into electromechanical disk operations in which data is stored onto, or retrieved from, the disk platters 304 is carried out by the disk drive I/O controller as specified by disk media read/write management firmware 308. Thus, the disk drive I/O control firmware 307 and the disk media read/write management firmware 308, along with the processors and memory that enable execution of the firmware, compose the disk drive controller.
Individual disk drives, such as the disk drive illustrated in FIG. 3, are normally connected to, and used by, a single remote computer, although it has been common to provide dual-ported disk drives for concurrent use by two computers and multi-host-accessible disk drives that can be accessed by numerous remote computers via a communications medium such as a fibre channel. However, the amount of electronic data that can be stored in a single disk drive is limited. In order to provide much larger-capacity electronic data-storage devices that can be efficiently accessed by numerous remote computers, disk manufacturers commonly combine many individual disk drives, such as the disk drive illustrated in FIG. 3, into a disk array device, increasing both the storage capacity and the capacity for parallel I/O-request servicing by concurrent operation of the multiple disk drives contained within the disk array.
FIG. 4 is a simple block diagram of a disk array. The disk array 402 includes a number of disk drive devices 403, 404, and 405. In FIG. 4, for simplicity of illustration, only three individual disk drives are shown within the disk array, but disk arrays may contain many tens or hundreds of individual disk drives. A disk array contains a disk array controller 406 and cache memory 407. Generally, data retrieved from disk drives in response to READ requests may be stored within the cache memory 407 so that subsequent requests for the same data can be more quickly satisfied by reading the data from the quickly accessible cache memory rather than from the much slower electromechanical disk drives. Various elaborate mechanisms are employed to maintain, within the cache memory 407, data that has the greatest chance of being subsequently re-requested within a reasonable amount of time. The disk array controller may also store data contained in WRITE requests in cache memory 407, in the event that the data may be subsequently requested via READ requests or in order to defer slower writing of the data to the physical storage medium.
Electronic data is stored within a disk array at specific addressable locations. Because a disk array may contain many different individual disk drives, the address space represented by a disk array is immense, generally many thousands of gigabytes. The overall address space is normally partitioned among a number of abstract data storage resources called logical units (“LUNs”). A LUN includes a defined amount of electronic data storage space, mapped to the data storage space of one or more disk drives within the disk array, and may be associated with various logical parameters including access privileges, backup frequencies, and mirror coordination with one or more LUNs. LUNs may also be based on random access memory (“RAM”), mass-storage devices other than hard disks, or combinations of memory, hard disks, and/or other types of mass-storage devices. Remote computers generally access data within a disk array through one of the many abstract LUNs 408-415 provided by the disk array via internal disk drives 403-405 and the disk array controller 406. Thus, a remote computer may specify a particular unit quantity of data, such as a byte, word, or block, using a bus communications medium address corresponding to the disk array, a LUN specifier, normally a 64-bit integer, and a 32-bit, 64-bit, or 128-bit data address within the logical data-address partition allocated to the LUN. The disk array controller translates such a data specification into an indication of a particular disk drive within the disk array and a logical data address within the disk drive. A disk drive controller within the disk drive finally translates the logical address to a physical medium address.
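The two-level address translation described above can be sketched as follows. The even division of a LUN's address space across member drives is an assumed, simplified mapping; real disk-array controllers employ far more elaborate schemes, and all names below are hypothetical.

```python
# Minimal sketch of the translation performed by a disk array controller:
# (LUN, LUN-relative block address) -> (member drive, drive-local address).
# The even striping across member drives is an illustrative assumption.

class LUN:
    def __init__(self, member_drives, blocks_per_drive):
        self.member_drives = member_drives        # ordered list of drive ids
        self.blocks_per_drive = blocks_per_drive  # blocks each drive contributes

    def translate(self, logical_block):
        """Translate a LUN-relative block address into a
        (drive id, drive-local block address) pair."""
        drive_index = logical_block // self.blocks_per_drive
        local_block = logical_block % self.blocks_per_drive
        return self.member_drives[drive_index], local_block

# A hypothetical LUN spanning three member drives:
lun = LUN(member_drives=["disk0", "disk1", "disk2"], blocks_per_drive=1000)
assert lun.translate(0) == ("disk0", 0)
assert lun.translate(1500) == ("disk1", 500)
```

The drive-local block address returned here would then be translated a second time, by the disk drive's own controller, into a physical medium address, completing the chain described above.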
Normally, electronic data is read and written as one or more blocks of contiguous 32-bit or 64-bit computer words, the exact details of the granularity of access depending on the hardware and firmware capabilities within the disk array and individual disk drives as well as the operating system of the remote computers generating I/O requests and characteristics of the communication medium interconnecting the disk array with the remote computers.
In many computer applications and systems that need to reliably store and retrieve data from a mass-storage device, such as a disk array, a primary data object, such as a file or database, is normally backed up to backup copies of the primary data object on physically discrete mass-storage devices or media so that if, during operation of the application or system, the primary data object becomes corrupted, inaccessible, or is overwritten or deleted, the primary data object can be restored by copying a backup copy of the primary data object from the mass-storage device. Many different techniques and methodologies for maintaining backup copies have been developed. In one well-known technique, a primary data object is mirrored. FIG. 5 illustrates object-level mirroring. In FIG. 5, a primary data object “O3” 501 is stored on LUN A 502. The mirror object, or backup copy, “O3” 503 is stored on LUN B 504. The arrows in FIG. 5, such as arrow 505, indicate I/O write operations directed to various objects stored on a LUN. I/O write operations directed to object “O3” are represented by arrow 506. When object-level mirroring is enabled, the disk array controller providing LUNs A and B automatically generates a second I/O write operation from each I/O write operation 506 directed to LUN A, and directs the second generated I/O write operation via path 507, switch “S1” 508, and path 509 to the mirror object “O3” 503 stored on LUN B 504. In FIG. 5, enablement of mirroring is logically represented by switch “S1” 508 being on. Thus, when object-level mirroring is enabled, any I/O write operation, or any other type of I/O operation that changes the representation of object “O3” 501 on LUN A, is automatically mirrored by the disk array controller to identically change the mirror object “O3” 503. Mirroring can be disabled, represented in FIG. 5 by switch “S1” 508 being in an off position. 
In that case, changes to the primary data object “O3” 501 are no longer automatically reflected in the mirror object “O3” 503. Thus, at the point that mirroring is disabled, the stored representation, or state, of the primary data object “O3” 501 may diverge from the stored representation, or state, of the mirror object “O3” 503. Once the primary and mirror copies of an object have diverged, the two copies can be brought back to identical representations, or states, by a resync operation represented in FIG. 5 by switch “S2” 510 being in an on position. In the normal mirroring operation, switch “S2” 510 is in the off position. During the resync operation, any I/O operations that occurred after mirroring was disabled are logically issued by the disk array controller to the mirror copy of the object via path 511, switch “S2,” and path 509. During resync, switch “S1” is in the off position. Once the resync operation is complete, logical switch “S2” is disabled and logical switch “S1” 508 can be turned on in order to reenable mirroring so that subsequent I/O write operations or other I/O operations that change the storage state of primary data object “O3,” are automatically reflected to the mirror object “O3” 503.
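The switch-controlled mirroring and resync behavior of FIG. 5 can be sketched, under the assumption that objects are simple key/value stores; the class and attribute names below are illustrative, not part of any actual controller interface.

```python
# Sketch of object-level mirroring with switches "S1" (live mirroring)
# and "S2" (resync). All names are illustrative assumptions.

class MirroredObject:
    def __init__(self):
        self.primary = {}          # primary object state on LUN A
        self.mirror = {}           # mirror object state on LUN B
        self.s1_mirroring = True   # switch "S1": automatic mirroring enabled
        self.pending = []          # writes accumulated while S1 is off

    def write(self, key, value):
        """An I/O write to the primary; duplicated to the mirror via S1."""
        self.primary[key] = value
        if self.s1_mirroring:
            self.mirror[key] = value         # automatically generated second WRITE
        else:
            self.pending.append((key, value))  # state diverges until resync

    def resync(self):
        """Switch "S2": replay deferred writes, then re-enable S1."""
        for key, value in self.pending:
            self.mirror[key] = value
        self.pending.clear()
        self.s1_mirroring = True
```

Disabling mirroring (setting `s1_mirroring` to `False`) lets the two states diverge; `resync` replays the accumulated changes before live mirroring resumes, matching the S1/S2 sequencing described above.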
FIG. 6 illustrates a dominant LUN coupled to a remote-mirror LUN. In FIG. 6, a number of computers and computer servers 601-608 are interconnected by various communications media 610-612 that are themselves interconnected by additional communications media 613-614. In order to provide fault tolerance and high availability for a large data set stored within a dominant LUN on a disk array 616 coupled to server computer 604, the dominant LUN 616 is mirrored to a remote-mirror LUN provided by a remote disk array 618. The two disk arrays are separately interconnected by a dedicated communications medium 620. Note that the disk arrays may be linked to server computers, as with disk arrays 616 and 618, or may be directly linked to communications medium 610. The dominant LUN 616 is the target for READ, WRITE, and other disk requests. All WRITE requests directed to the dominant LUN 616 are transmitted by the dominant LUN 616 to the remote-mirror LUN 618, so that the remote-mirror LUN faithfully mirrors the data stored within the dominant LUN. If the dominant LUN fails, the requests that would have been directed to the dominant LUN can be redirected to the mirror LUN without a perceptible interruption in request servicing. When operation of the dominant LUN 616 is restored, the dominant LUN 616 may become the remote-mirror LUN for the previous remote-mirror LUN 618, which becomes the new dominant LUN, and may be resynchronized to become a faithful copy of the new dominant LUN 618. Alternatively, the restored dominant LUN 616 may be brought up to the same data state as the remote-mirror LUN 618 via data copies from the remote-mirror LUN and then resume operating as the dominant LUN. Various types of dominant-LUN/remote-mirror-LUN pairs have been devised. Some operate entirely synchronously, while others allow for asynchronous operation and reasonably slight discrepancies between the data states of the dominant LUN and mirror LUN.
FIG. 7 schematically illustrates normal operation of a dominant LUN/remote-mirror-LUN pair. Data access requests, including READ and WRITE requests, such as data access request 701, are received from a communications medium by the controller 703 of the disk array 702 that provides the dominant LUN, or dominant disk array. The received data-access requests are routed by the controller 703 to a memory-based input queue 704 within the memory component 705 of the dominant disk array 702. The controller 703 also routes WRITE requests, such as WRITE request 706, through one or more dedicated communications links to the remote disk array 707 that provides the remote-mirror LUN. The controller 708 of the remote disk array routes the received WRITE requests to an input queue 709 within the memory component 710, from which the WRITE requests are later dequeued and written to appropriate disk drives within the remote disk array. Similarly, within the dominant LUN, disk-access requests are subsequently dequeued from the memory queue 704 and internally transmitted to the appropriate disk drives. In a synchronous dominant LUN/remote-mirror-LUN pair, a next WRITE operation is not dequeued and carried out by the dominant LUN until an acknowledgement is received from the remote-mirror LUN for the previous WRITE request forwarded to the remote-mirror LUN. Thus, the dominant LUN and remote-mirror LUN are kept in nearly identical data states. In an asynchronous dominant LUN/remote-mirror-LUN pair, WRITE requests may be carried out on the dominant LUN well before execution of the WRITE requests on the remote-mirror LUN. However, various techniques are employed in asynchronous dominant LUN/remote-mirror-LUN pairs to manage the data state disparity in order to ensure data integrity. 
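The synchronous discipline described above can be sketched as follows; the class names and the "ACK" token are illustrative assumptions, and the sketch collapses the memory queues of FIG. 7 into direct method calls.

```python
# Sketch of synchronous WRITE forwarding in a dominant-LUN/remote-mirror-LUN
# pair: the dominant side carries out a WRITE only after the remote side
# acknowledges it, keeping the two data states nearly identical.
# All names are illustrative assumptions.

class RemoteMirror:
    def __init__(self):
        self.state = {}

    def apply(self, key, value):
        """Apply a forwarded WRITE and return an acknowledgement."""
        self.state[key] = value
        return "ACK"

class SynchronousDominant:
    def __init__(self, remote):
        self.state = {}
        self.remote = remote

    def write(self, key, value):
        # Forward to the remote mirror first; only proceed with the
        # local WRITE once the acknowledgement has been received.
        ack = self.remote.apply(key, value)
        assert ack == "ACK"
        self.state[key] = value
```

An asynchronous pair would instead apply the WRITE locally at once and forward it later, which is faster but requires the additional data-integrity mechanisms mentioned above to manage the resulting disparity in data states.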
Note that other types of requests and commands besides WRITE requests may alter the data state of a mass-storage device, and that, in this discussion, a WRITE request refers to any command or request that can alter the data state of a mass-storage device.
Occasionally, the dedicated communications link may fail. In certain cases, communication between the dominant LUN and the remote-mirror LUN may be redirected through alternate communications paths. However, in other cases, the dominant LUN may become isolated from the remote-mirror LUN, and WRITE operations received by the dominant LUN may not be transmitted by the dominant LUN to the remote-mirror LUN. FIG. 8 illustrates a communications failure within a dominant LUN/remote-mirror-LUN pair. In FIG. 8, the communications link 802 between the dominant disk array 702 and the remote disk array 707 has been broken, and the memory queue 709 within the memory component 710 of the remote disk array 707 is empty. Because WRITE requests cannot be forwarded from the dominant LUN to the remote-mirror LUN, the dominant LUN instead begins to accumulate the unforwarded WRITE requests in a buffer 804 within the memory component 705 of the dominant disk array 702. When the communications link between the dominant LUN and the remote-mirror LUN is restored, the dominant LUN can then forward the stored WRITE requests to the remote-mirror LUN in order to resynchronize the data states of the dominant LUN and remote-mirror LUN.
Two different types of WRITE-request buffers are generally employed to temporarily buffer WRITE requests during a communications failure. FIGS. 9A and 9B illustrate these two different types of WRITE-request buffers. Initially, the dominant LUN may store the unforwarded WRITE requests in a time-ordered WRITE-request buffer, illustrated in FIG. 9A. In the time-ordered WRITE-request buffer 902, unforwarded WRITE requests are stored in the order in which they are received and queued to the memory queue 704 in the memory component 705 of the dominant LUN. Alternatively, if WRITE requests are sequenced, by containing sequence numbers or other sequencing mechanisms, the WRITE requests may be stored in the time-ordered WRITE-request buffer 902 in their sequence order. Upon restoration of communication between the dominant LUN and the remote-mirror LUN, the WRITE requests can be extracted in first-in-first-out order (“FIFO”) for transmission to the remote-mirror LUN.
Unfortunately, the amount of data that can be stored within the time-ordered WRITE-request buffer 902 is limited, and each WRITE request, such as WRITE request 904, must be stored in its entirety. The advantage of using a time-ordered WRITE-request buffer is that, upon resumption of communications between the dominant LUN and the remote-mirror LUN, the WRITE request can be straightforwardly extracted from the time-ordered WRITE-request buffer and forwarded to the remote-mirror LUN without risking corruption of the data stored on the remote-mirror LUN. In other words, WRITE-request forwarding from the dominant LUN to the remote-mirror LUN is delayed, but the sequence of forwarded WRITE requests is maintained.
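A minimal sketch of such a time-ordered WRITE-request buffer follows, with a fixed capacity modeling the limited memory noted above; all names are illustrative assumptions.

```python
# Sketch of the time-ordered WRITE-request buffer of FIG. 9A: entire
# WRITE requests are buffered in arrival order during a communications
# failure and later drained first-in-first-out to the remote mirror.

from collections import deque

class TimeOrderedBuffer:
    def __init__(self, capacity):
        self.capacity = capacity   # limited memory for full WRITE requests
        self.requests = deque()

    def buffer_write(self, request) -> bool:
        """Store a full WRITE request. Returns False when the buffer is
        exhausted, signalling a transition to the bit-map buffer."""
        if len(self.requests) >= self.capacity:
            return False
        self.requests.append(request)
        return True

    def drain_fifo(self):
        """On restoration of the link, yield requests in original order."""
        while self.requests:
            yield self.requests.popleft()
```

Because the drain preserves arrival order, replaying the buffer leaves the remote-mirror LUN consistent at every intermediate point, at the cost of storing each request in its entirety.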
If the communications link is not quickly restored, a time-ordered WRITE-request buffer is generally quickly exhausted, and the dominant LUN then commonly transitions to employing a WRITE-request bit-map buffer, as shown in FIG. 9B. In a WRITE-request bit-map buffer, each data storage unit of a particular type within the dominant LUN is represented within the WRITE-request bit-map buffer by a single bit. When the bit is set, the bit map indicates that the corresponding data storage unit has been written since communications between the dominant LUN and the remote-mirror LUN was last interrupted. A set bit may be called a “dirty bit,” flagging dirty data storage units. Generally, either tracks or cylinders are employed as the logical data storage unit represented by a single bit within the bit map, in order to keep the bit map reasonably sized while, at the same time, maintaining sufficient granularity so that copying only dirty data storage units to the remote-mirror LUN represents a distinct savings with respect to copying all data storage units to the remote-mirror LUN.
The WRITE-request bit-map buffer 906 is far more compact than a time-ordered WRITE-request buffer. Rather than storing the entire WRITE-request, including the data to be written, the WRITE-request bit-map buffer needs to maintain only a single bit for each track or cylinder to indicate whether or not the track or cylinder has been written since the communications link was broken. Unfortunately, the WRITE-request bit-map buffer does not maintain any WRITE-request sequence information. Thus, when the communications link is restored, and the data state of the remote-mirror LUN needs to be resynchronized with that of the dominant LUN, the remote-mirror LUN is generally in a potentially corrupt data state while tracks or cylinders indicated in the bit map are transferred from the dominant LUN to the remote-mirror LUN. This potentially corrupted data state arises because the sequence of WRITE operations received by the dominant LUN has not been preserved in the WRITE-request bit map.
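A minimal sketch of such a per-track WRITE-request bit map follows; the track count and names are illustrative assumptions. Note that the dirty tracks can only be recovered in track-number order, since the original WRITE sequence is not preserved.

```python
# Sketch of the WRITE-request bit-map buffer of FIG. 9B: one bit per
# track records only *that* the track was written since the link failed,
# not what was written or in what order. Names are illustrative.

class WriteBitmap:
    def __init__(self, num_tracks):
        self.num_tracks = num_tracks
        self.bits = bytearray((num_tracks + 7) // 8)  # one bit per track

    def mark_dirty(self, track):
        """Record that a WRITE touched this track (set its "dirty bit")."""
        self.bits[track // 8] |= 1 << (track % 8)

    def is_dirty(self, track):
        return bool(self.bits[track // 8] & (1 << (track % 8)))

    def dirty_tracks(self):
        """Tracks to copy whole to the remote mirror. They come back in
        track-number order -- the original WRITE order is gone."""
        return [t for t in range(self.num_tracks) if self.is_dirty(t)]
```

The compactness is apparent: one byte covers eight tracks regardless of how much data was written to them, which is exactly why the sequence information cannot be retained.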
FIGS. 10A-E illustrate an example of a detrimental out-of-order WRITE request applied to a mass-storage device. The example of FIGS. 10A-E involves a simple linked list. FIG. 10A is an abstract illustration of a general, linked-list data structure. The data structure comprises three nodes, or data blocks 1001-1003. A linked list may contain zero or more data blocks, up to some maximum number of data blocks that can be stored in the memory of a particular computer. Generally, a separate pointer 1004 contains the address of the first node of the linked list. In FIGS. 10A-E, a pointer, or address, is represented by an arrow, such as arrow 1005, pointing to the node to which the address refers, and emanating from a memory location, such as memory location 1006, in which the pointer is stored. Each node of the linked list includes a pointer and other data stored within the node. For example, node 1001 includes pointer 1007 that references node 1002 as well as additional space 1008 that may contain various amounts of data represented in various different formats. Linked lists are commonly employed to maintain, in memory, ordered sets of data records that may grow and contract dynamically during execution of a program. Linked lists are also employed to represent ordered records within the data stored on a mass-storage device. Note that the final node 1003 in the linked list of FIG. 10A includes a null pointer 1009, indicating that this node is the final node in the linked list.
FIGS. 10B-E abstractly represent data blocks, stored on a mass-storage device, that contain a linked list of data blocks. Each data-block node, such as data-block node 1010, includes a pointer, such as pointer 1012, and some amount of stored data, such as stored data 1014. The list of data blocks in FIG. 10B starts with node 1010, next includes node 1016, then node 1018, and, finally, node 1020. Each data block can be written or overwritten in a single mass-storage-device access. Data blocks 1022 and 1024 in FIG. 10B are unused.
Consider the addition of a new node, or data block, to the end of the linked list. The two WRITE operations required to add a data block to the end of the list are illustrated in FIGS. 10C-D. First, the new node is written to data block 1024, as shown in FIG. 10C. Then, node 1020 is overwritten in order to change the pointer within node 1020 to reference the newly added node 1024. When these operations are performed in the sequence shown in FIGS. 10C-D, the linked list is consistent at each point in the two-WRITE-request operation. For example, in FIG. 10C, the new node has been written, but is not yet a member of the linked list. If the second operation, illustrated in FIG. 10D, fails, the linked list remains intact, with the only deleterious effect being an overwritten, and possibly wasted, data block 1024. In the second operation, illustrated in FIG. 10D, the pointer within data node 1020 is updated to point to the already resident node 1024, leaving the linked list intact, consistent, and extended by one node.
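The safe two-WRITE ordering above can be sketched by modeling the mass-storage device as a mapping from block numbers to (pointer, data) pairs; all names are illustrative assumptions.

```python
# Sketch of the two-WRITE node append of FIGS. 10C-D. Each block holds
# a (pointer, data) pair; a None pointer terminates the list.

NULL = None

def append_safe(blocks, tail, new_block, data):
    """Append a node in the order that keeps the list consistent."""
    # WRITE 1 (FIG. 10C): write the new node first, null-terminated.
    blocks[new_block] = (NULL, data)
    # WRITE 2 (FIG. 10D): only then overwrite the old tail's pointer.
    _, tail_data = blocks[tail]
    blocks[tail] = (new_block, tail_data)

def traverse(blocks, head):
    """Follow pointers from the head block, collecting node data."""
    out, node = [], head
    while node is not NULL:
        ptr, data = blocks[node]
        out.append(data)
        node = ptr
    return out

# A two-node list in blocks 0 and 1; block 2 is unused.
blocks = {0: (1, "a"), 1: (NULL, "b"), 2: (NULL, "unused")}
append_safe(blocks, tail=1, new_block=2, data="c")
assert traverse(blocks, 0) == ["a", "b", "c"]
```

Reversing the two WRITEs would, between them, leave the tail pointing at an unwritten block, which is precisely the window of corruption examined next.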
Consider, by contrast, the state of the linked list should the second WRITE operation, illustrated in FIG. 10D, occur prior to the first WRITE operation, illustrated in FIG. 10C. In this case, illustrated in FIG. 10E, the pointer within node 1020 references data block 1024. However, data block 1024 has not been written, and has therefore not been formatted to contain a pointer having a null value. If, at this point, the remaining WRITE operation fails, the linked list is corrupted. A software routine traversing the nodes of the linked list cannot determine where the list ends. Moreover, the software routine will generally interpret any data found in data block 1024 as the contents of the fifth node of the linked list, possibly leading to further data corruption. Thus, the order of WRITE operations for adding a node to a linked list stored on a mass-storage device is critical in the case that all WRITE operations are not successfully carried out. When WRITE requests are extracted from the time-ordered WRITE-request buffer shown in FIG. 9A and issued to a remote-mirror LUN, the remote-mirror LUN will remain in a data-consistent state throughout the period of time during which the buffered WRITE requests are carried out, provided that the WRITE requests are forwarded in the order in which they were received by the dominant LUN. However, when tracks or cylinders flagged in the WRITE-request bit-map buffer of FIG. 9B are retrieved and sent in an arbitrary order to the remote-mirror LUN, the data state of the remote-mirror LUN may be quite inconsistent, and potentially corrupted, until all tracks or cylinders flagged within the WRITE-request bit-map buffer are successfully transferred to the remote LUN and stored within mass-storage devices of the remote LUN. The corruption illustrated in FIGS. 10A-E is rather straightforward and simple.
The potential corruption within hundreds of gigabytes of data stored within a mass-storage-device LUN and incompletely transferred, out-of-order, to a remote LUN is staggering. Literally hundreds of thousands of complex data interrelationships may be irreparably broken.
Unfortunately, currently available mass-storage devices, such as disk arrays, have only a limited capacity to buffer WRITE requests within a time-ordered WRITE-request buffer during communication failure between a dominant LUN and a remote-mirror LUN. In general, even large electronic memory components can buffer only a few tens of gigabytes of data. When a communication failure persists for more than a short period of time, the dominant LUN needs to switch to a WRITE-request bit-map buffer in order to be able to keep track of those tracks or cylinders overwritten during the communications failure. This, in turn, opens a relatively wide window for unrecoverable failures in the dominant LUN/remote-mirror LUN system. If the communications link again fails during the remote-mirror LUN resynchronization operation, when overwritten tracks and cylinders are retrieved and transmitted to the remote-mirror LUN from the dominant LUN, the remote-mirror LUN is almost certainly left in a seriously inconsistent and unrecoverable state. When communications is again restored, the entire WRITE-request bit-map buffer flush operation must be restarted from the beginning. More seriously, if the second communication failure is accompanied by a failure of the dominant LUN, there remains no uncorrupted data within the dominant LUN/remote-mirror LUN pair accessible to remote users. If, in the worst case, the data stored within the dominant LUN is corrupted or lost due to the dominant-LUN failure, then it is impossible to reconstruct an uncorrupted, consistent data set. Thus, a very real possibility for complete loss of data from both the dominant LUN and the remote-mirror LUN currently exists in commonly available mass-storage mirroring systems.
Designers, manufacturers, and users of highly available, dominant-LUN/remote-mirror-LUN mass-storage-device pairs have therefore recognized the need for a more reliable method for deferring execution of WRITE requests on the remote-mirror LUN during communications failure between the dominant LUN and the mirror LUN, and a more reliable method for resynchronizing the data state of the mirror LUN with that of the dominant LUN following restoration of communications between the dominant LUN and the mirror LUN.