Some currently available high-end storage appliances are designed as a cluster of servers that share responsibility for storage functions. The servers are connected to each other via a low-latency, high-throughput network interconnect (e.g., InfiniBand, 10 Gbps Ethernet). Some or all servers have network adapters for external host connectivity, which may be slower than the cluster interconnect (e.g., Fibre Channel, 1 Gbps Ethernet); these servers are hereinafter referred to as "interface servers". In addition, some or all servers have local storage facilities (e.g., disk drives, flash) and are responsible for transferring data to and from this local storage; such servers are hereinafter referred to as "storage servers". A single server may act as both an interface server and a storage server.

A typical Input/Output (I/O) operation is initiated by an external host, arrives at an interface server, and is then routed to a specific storage server via the internal interconnect (possibly using a proprietary protocol). It is common for such systems to use a group-membership or other cluster-communication algorithm to ensure that the interface servers know which storage server holds the physical medium for a particular I/O request; this cluster software is aimed at ensuring that correct routing is maintained in spite of failures or other reconfigurations of the mapping of data to the storage servers. An I/O operation therefore requires at least two hops, depending on the appliance and the type of I/O. Servers can be classified hierarchically by their distance from the physical storage: closest are the storage servers, then the interface servers, and farthest are the external hosts. The cost of each hop in terms of system resources is not negligible: it typically includes at least one data copy, interconnect latencies, and protocol and routing overheads.
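The routing step described above can be sketched in a few lines. The following is a minimal illustration, not an implementation of any particular appliance: all class and parameter names (`ClusterMap`, `InterfaceServer`, `extent_size`, `owners`) are hypothetical, and the cluster-maintained mapping of data extents to storage servers is reduced to a plain dictionary.

```python
class ClusterMap:
    """Mapping of data extents to storage servers. In a real appliance
    this map is kept consistent across servers by group-membership /
    cluster-communication software; here it is a static dictionary."""

    def __init__(self, extent_size, owners):
        self.extent_size = extent_size  # bytes per extent (hypothetical granularity)
        self.owners = owners            # extent index -> storage-server id

    def owner_of(self, offset):
        # Locate the storage server holding the physical medium
        # for the byte at this logical offset.
        return self.owners[offset // self.extent_size]


class InterfaceServer:
    """Receives host I/O (hop 1: host -> interface server over, e.g.,
    Fibre Channel) and forwards it over the internal interconnect
    (hop 2) to the storage server that owns the data."""

    def __init__(self, cluster_map, interconnect):
        self.cluster_map = cluster_map
        self.interconnect = interconnect  # server id -> send callable

    def handle_io(self, offset, payload):
        server = self.cluster_map.owner_of(offset)
        # Second hop: route the request over the internal interconnect.
        return self.interconnect[server](offset, payload)
```

Even in this toy form, the two-hop structure is visible: every request crosses the host-facing adapter once and the internal interconnect once, and each crossing in a real system adds the data copy, latency, and protocol overheads noted above.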