A storage server is a computer that provides access to information that is stored on one or more storage devices connected to the storage server, such as disk drives (“disks”), flash memories, or storage arrays. The storage server includes an operating system that may implement a storage abstraction layer to logically organize the information as storage objects on the storage devices. With certain logical organizations, the storage abstraction layer may involve a file system which organizes information as a hierarchical structure of directories and files. Each file may be implemented as a set of data structures, e.g., disk blocks, configured to store information, such as the actual data for the file. The file system typically organizes such data blocks as a logical “volume,” with one or more volumes further organized as a logical “aggregate” for efficiently managing multiple volumes as a group. In a file system, each directory, file, volume, and aggregate may constitute a storage object. In other logical organizations, a file system may itself constitute a storage object, with the storage abstraction layer managing multiple file systems.
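The hierarchy described above (data blocks within files, files within volumes, volumes within an aggregate) can be sketched as a simple data model. This is an illustrative sketch only; the class and field names are assumptions, not any particular file system's on-disk layout.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative model of the storage-object hierarchy:
# disk blocks -> files -> volumes -> aggregate.

@dataclass
class File:
    name: str
    blocks: List[bytes] = field(default_factory=list)  # disk blocks holding the file's data

@dataclass
class Volume:
    name: str
    files: List[File] = field(default_factory=list)

@dataclass
class Aggregate:
    name: str
    volumes: List[Volume] = field(default_factory=list)

    def total_files(self) -> int:
        # Aggregates manage multiple volumes as a group.
        return sum(len(v.files) for v in self.volumes)

# An aggregate grouping two volumes, one of which holds a single file.
aggr = Aggregate("aggr0", [Volume("vol1", [File("a.txt")]), Volume("vol2")])
```

Each of the objects above (file, volume, aggregate) would constitute a storage object in the sense used in the text.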
A storage server may be configured to operate according to a client/server model of information delivery to allow one or more clients access to data in storage objects stored on the storage server. In this model, the client may comprise an application executing on a computer that “connects” to the storage server over a computer network, such as a point-to-point link, shared local area network, wide area network, or virtual private network implemented over a public network, such as the Internet. A client may access the storage devices by submitting access requests to the storage server, for example, a “write” request to store client data on the storage devices or a “read” request to retrieve client data stored on the storage devices.
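The request/response exchange described above can be sketched as follows. This is a minimal sketch of the client/server model, not any vendor's storage protocol; the request fields and in-memory block map are assumptions for illustration.

```python
# Minimal sketch of a storage server servicing "read" and "write"
# access requests from clients, keyed by a block address.

class StorageServer:
    def __init__(self):
        self._blocks = {}  # block address -> stored client data

    def handle(self, request):
        op = request["op"]
        if op == "write":
            # Store the client data included in the request.
            self._blocks[request["addr"]] = request["data"]
            return {"status": "ok"}
        if op == "read":
            # Retrieve previously stored client data.
            return {"status": "ok", "data": self._blocks.get(request["addr"])}
        return {"status": "error", "reason": "unknown op"}

server = StorageServer()
server.handle({"op": "write", "addr": 7, "data": b"client data"})
resp = server.handle({"op": "read", "addr": 7})
```

In a real deployment the client would submit such requests over the network connection described above rather than by direct method call.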
Multiple storage servers may be networked or otherwise connected together as a storage system to distribute the processing load of the system across multiple storage servers. Processing load refers to the load placed on a storage server by servicing storage requests from clients directed to a storage object (e.g., an aggregate) of the storage server. In certain cases, however, one of the storage servers may be more heavily loaded than another storage server in the system. Thus, it may be desirable to offload client requests for an aggregate from one storage server (source) to another (destination). In other instances, a source may undergo routine maintenance processing or upgrades, so it may also be desirable for a destination to carry out requests on the aggregate to ensure continued access to client data during those periods. In these cases, “ownership” (servicing) of an aggregate by a storage server may be changed by migrating the aggregate between storage servers.
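The decision to offload an aggregate from a heavily loaded source to a less loaded destination can be sketched as a simple load comparison. The load metric (outstanding request count) and the 2x imbalance threshold are assumptions for illustration, not a described policy.

```python
# Illustrative load-balancing decision: pick a (source, destination)
# pair when the busiest server's load exceeds the least-busy server's
# load by a chosen factor. Loads here are outstanding request counts.

def pick_migration(loads, threshold=2.0):
    """Return (source, destination) if migration looks worthwhile, else None."""
    busiest = max(loads, key=loads.get)
    idlest = min(loads, key=loads.get)
    if loads[idlest] > 0 and loads[busiest] / loads[idlest] >= threshold:
        return busiest, idlest  # offload from busiest to idlest
    return None

# server1 is three times as loaded as server2, so a migration is proposed.
plan = pick_migration({"server1": 90, "server2": 30})
```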
One known technique for migrating aggregates involves copying data of an aggregate from the source to the destination. However, copy operations may result in increased load on both the source and destination during migration since each must still continue to perform normal processing tasks such as servicing other aggregates. Additionally, copy operations are not instantaneous and, depending on the size of the aggregate and the physical distance between storage servers, a lengthy delay in accessing an aggregate may be experienced by a client. Conventional techniques using copy operations to migrate aggregates thus tie up system resources such as network bandwidth and may cause increased delays in accessing client data.
To avoid unwieldy copy operations, another known technique referred to as “zero-copy migration” may be performed between storage servers configured in a distributed architecture. Here, storage servers are implemented as “nodes” in the storage system, where each node accesses a shared pool of storage containing the aggregates of the system. Although multiple nodes have physical access to an aggregate in the shared storage pool, only one of the nodes owns the aggregate at any one time. In the event a migration operation is desirable, a zero-copy migration operation may be performed by passing ownership of the aggregate to another node without copying data between physically remote locations. The passing of ownership may, for instance, be carried out by known storage protocols operating between the nodes to relinquish or gain control of the aggregate in shared storage.
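The ownership hand-off at the heart of zero-copy migration can be sketched as follows: the aggregate's data stays in the shared pool, and only an ownership record changes. This is a hedged sketch under the assumptions that ownership is tracked centrally and that the hand-off is atomic; real systems use storage protocols between the nodes for this.

```python
# Sketch of zero-copy migration: aggregates reside in a shared storage
# pool accessible to all nodes, and migrating an aggregate changes only
# which node owns (services) it. No data is copied between locations.

class SharedStoragePool:
    def __init__(self):
        self.owner = {}  # aggregate name -> owning node

    def assign(self, aggregate, node):
        self.owner[aggregate] = node

    def migrate(self, aggregate, source, destination):
        # Source relinquishes control; destination gains it.
        if self.owner.get(aggregate) != source:
            raise ValueError(f"{source} does not own {aggregate}")
        self.owner[aggregate] = destination  # ownership change only, no data movement

pool = SharedStoragePool()
pool.assign("aggr0", "nodeA")
pool.migrate("aggr0", "nodeA", "nodeB")
```

Because only the ownership record changes, the operation avoids the bandwidth cost and access delays of the copy-based technique described earlier.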
In order to enable zero-copy migration, however, a storage administrator must manually configure each of the nodes in the system to facilitate ownership changes to the aggregate. This involves a non-trivial task of configuring the physical components such as the network interface controllers of the nodes to enable the hand-off process between the nodes. In certain cases, this may require unwieldy manual effort on the part of the administrator, as well as specialized knowledge and/or skills, in performing the task. Additionally, information related to aggregates owned by a particular node must also be maintained by the client in order to gain network access to the aggregate. To that end, node and aggregate information must further be managed by the clients upon migration so client requests may be directed to the appropriate node.
The conventional zero-copy migration technique is further deficient if the data storage needs of the administrator change. For instance, the administrator may desire to enhance the capability of the storage system to provide additional storage capacity and/or processing capabilities as storage needs grow. As such, a storage system which readily scales to such changing needs would be preferable under these circumstances. However, using conventional techniques, at least one other node in the system must be reconfigured by the administrator to extend the zero-copy migration functionality to a new node added to the system. Thus, while known techniques for zero-copy operations do avoid tying up network resources and lengthy data access delays, other deficiencies still exist with known techniques for zero-copy migration of aggregates between storage servers.