Increasingly, large-scale enterprises and co-location hosting facilities rely on the gathering and interpretation of large amounts of information. One approach to meeting such information storage and access needs is a network storage system (NSS). A network storage system may include one or more interconnected computing machines that can store and control access to data. In many conventional approaches, an NSS may include multiple servers that have access to storage media. Such servers may be coupled to one another in a functional manner.
Conventional approaches to an NSS will now be described. Referring now to FIG. 7, an NSS is shown in a block diagram and designated by the general reference character 700. An NSS 700 may include a number of servers 702-0 to 702-n and storage media sets 704-0 to 704-n. In the particular example of FIG. 7, each server (702-0 to 702-n) has a physical connection to a corresponding storage media set (704-0 to 704-n).
Each server (702-0 to 702-n) may run an application that can access data on the storage media sets (704-0 to 704-n). More particularly, each server (702-0 to 702-n) may run an instance of a file server application. Such applications are shown in FIG. 7 as items 706-0 to 706-n.
FIG. 7 shows an example of servers in a “share nothing” configuration. In such a configuration, each storage media set (704-0 to 704-n) may store a predetermined set of files that is accessible by a corresponding server (702-0 to 702-n). Servers (702-0 to 702-n) may receive a request to access a file. How such a request is serviced may depend upon where the file is located with respect to the server. For example, server 702-0 may receive a request to access a file in storage media set 704-0. An application 706-0 may service the request by directly accessing storage media set 704-0. In contrast, if server 702-0 receives a request for a file stored in storage media set 704-1, the request may be forwarded to server 702-1, an operation referred to as “function shipping”. A drawback to this approach can be the inability to scale such storage systems. Dividing access to files among a predetermined number of servers may allow server operation to be optimized for the set of files. However, if one or more servers fail, the data on the corresponding storage media sets is not accessible. In addition, changes in files and/or access patterns to files can result in a load imbalance, as one server may service more requests (either directly or by way of function shipping) than the other servers. Such a load imbalance may slow the entire system down and can be inefficient in terms of resource use.
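The request routing described above can be sketched in a few lines of Python. This is a hypothetical illustration only; the class and method names (`Server`, `read`) are invented for the example and do not come from any actual system. Each server owns a fixed set of files (its directly attached media set) and forwards requests for other files to the peer that owns them:

```python
# Hypothetical sketch of request servicing in a "share nothing" NSS as in FIG. 7.
class Server:
    def __init__(self, server_id, local_files):
        self.server_id = server_id
        self.local_files = set(local_files)  # files on the directly attached media set
        self.peers = {}                      # server_id -> Server, for function shipping

    def read(self, filename):
        if filename in self.local_files:
            # Direct access: the file lives on this server's own media set.
            return f"server {self.server_id} read {filename} locally"
        # Function shipping: forward the request to the server that owns the file.
        for peer in self.peers.values():
            if filename in peer.local_files:
                return peer.read(filename)
        raise FileNotFoundError(filename)

# Two servers, each bound to a fixed media set -- the static partition that
# makes this design hard to scale or rebalance.
s0 = Server(0, ["a.txt"])
s1 = Server(1, ["b.txt"])
s0.peers[1] = s1
s1.peers[0] = s0

print(s0.read("a.txt"))  # serviced directly by server 0
print(s0.read("b.txt"))  # function-shipped to server 1
```

Note that the file-to-server assignment is fixed at construction time, which mirrors the scaling drawback described above: adding a server requires redistributing files and rebuilding the peer maps.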
Yet another drawback to a conventional “share nothing” system 700 can be that the number of servers (702-0 to 702-n) is essentially locked to one value from the start (“n” in the example of FIG. 7). Consequently, increasing the number of servers can be an expensive process, as files may have to be redistributed across media sets and the servers re-optimized for the new file distribution.
Referring now to FIG. 8, a second example of a conventional NSS is shown in a block diagram and designated by the general reference character 800. As in the case of the first example in FIG. 7, an NSS 800 may include a number of servers 802-0 to 802-n and storage media sets 804-0 to 804-n, and each server (802-0 to 802-n) may run an instance of an application (806-0 to 806-n). FIG. 8 shows an example of servers that have a “share everything” configuration. In a share everything configuration, each server (802-0 to 802-n) may have access to all storage media sets (804-0 to 804-n). Thus, as shown in FIG. 8, servers (802-0 to 802-n) may be connected to storage media sets (804-0 to 804-n) by way of a sharing interface 808. A sharing interface 808 may include storage media that can be accessed by multiple servers, such as multi-ported storage disks or the like. In addition, or alternatively, a sharing interface 808 may include a software abstraction layer that presents all storage media sets (804-0 to 804-n) as being accessible by all servers (802-0 to 802-n).
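The software-abstraction-layer form of a sharing interface can be sketched as follows. This is a minimal, hypothetical illustration (the `SharingInterface` class and its methods are invented for the example): every server calls into the same layer, which presents one flat namespace over all media sets.

```python
# Hypothetical sketch of a sharing interface (808) realized as a software
# abstraction layer: all media sets appear accessible to every server.
class SharingInterface:
    def __init__(self):
        self.media_sets = {}  # media_set_id -> {filename: data}

    def add_media_set(self, media_set_id, files):
        self.media_sets[media_set_id] = dict(files)

    def read(self, filename):
        # Any server may call read(); the layer locates the backing media set.
        for files in self.media_sets.values():
            if filename in files:
                return files[filename]
        raise FileNotFoundError(filename)

iface = SharingInterface()
iface.add_media_set("804-0", {"a.txt": b"alpha"})
iface.add_media_set("804-1", {"b.txt": b"beta"})

# Any server using this interface can reach either file.
assert iface.read("a.txt") == b"alpha"
assert iface.read("b.txt") == b"beta"
```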
A conventional NSS 800 may also present added complexity in relation to scaling and/or load balancing. When scaling a system 800 up or down, as components are added to or removed from the system, the system may have to be restarted to enable each instance of an application (806-0 to 806-n) to account for the increased or decreased resources. Such a restart can enable each instance to utilize the new or newly configured resources of the system.
Yet another drawback to a conventional “share everything” NSS 800 can arise in approaches in which the sharing interface includes a software abstraction layer 808. In such cases, particular servers may have a physical connection to only particular media sets (804-0 to 804-n). A software abstraction layer 808 and/or application (806-0 to 806-n) may then perform operations similar to the function shipping previously described in conjunction with FIG. 7. Further, in such cases an abstraction layer 808 and/or application (806-0 to 806-n) may have to keep track of which servers have a physical connection to which particular storage media sets (804-0 to 804-n) in order to coordinate such function shipping.
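The bookkeeping burden described above can be made concrete with a small sketch. The tables and the `server_for` helper below are hypothetical, invented for illustration: the abstraction layer must maintain a map of physical connectivity and consult it before shipping a request.

```python
# Hypothetical sketch of the connectivity tracking an abstraction layer (808)
# may have to perform to coordinate function shipping.
PHYSICAL = {                # server -> media sets it is physically wired to
    "802-0": {"804-0"},
    "802-1": {"804-1"},
}
LOCATION = {"a.txt": "804-0", "b.txt": "804-1"}  # file -> media set holding it

def server_for(filename):
    """Return a server with a physical connection to the file's media set."""
    media_set = LOCATION[filename]
    for server, media_sets in PHYSICAL.items():
        if media_set in media_sets:
            return server
    raise LookupError(f"no server connected to {media_set}")

assert server_for("a.txt") == "802-0"
assert server_for("b.txt") == "802-1"  # a request arriving elsewhere must be shipped here
```

Both tables must be kept consistent as servers and media sets come and go, which is part of the complexity attributed to this approach.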
Various ways of implementing server directories in distributed systems, including network storage systems, are known in the art, such as LDAP, the Network Information Service (NIS), and the Domain Name System (DNS). As one example, the DNS protocol can enable communication between servers by way of a domain name server. As is well understood, a domain name server can access root name servers and master name servers to find the Internet Protocol (IP) address of a host machine. While such an approach can provide an efficient way of mapping a domain name to an IP address, it may not be suitable for scalable storage systems.
As another example, to utilize the advantages of object-oriented programming, distributed systems can include naming services. The Common Object Request Broker Architecture (CORBA) Naming Service is one example. As is well understood, in a system that uses the CORBA Naming Service, clients may invoke objects by way of an interface, where such objects may be remote instances.
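The essential bind/resolve pattern of such a naming service can be sketched in plain Python. This is not the actual CORBA API (which involves IDL interfaces and an ORB); the `NamingContext` and `RemoteCounter` classes are stand-ins invented for illustration:

```python
# Minimal sketch of a naming service in the spirit of the CORBA Naming
# Service: a client resolves a name to an object reference, then invokes it.
class NamingContext:
    def __init__(self):
        self._bindings = {}

    def bind(self, name, obj):
        self._bindings[name] = obj      # register an object under a name

    def resolve(self, name):
        return self._bindings[name]     # look an object up by name

class RemoteCounter:  # stands in for a possibly remote object instance
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1
        return self.value

ctx = NamingContext()
ctx.bind("counter", RemoteCounter())
counter = ctx.resolve("counter")  # client looks up by name, not by location
assert counter.increment() == 1
```

The client never needs to know where the object lives, only its name, which is the property that makes naming services attractive for distributed systems.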
While DNS and the CORBA Naming Service provide naming services for distributed systems and can be incorporated into the present invention, there remains a need for a server directory implementation that provides naming services to a highly scalable storage system without some or all of the drawbacks of the conventional approaches described above.