The rapid adoption of Internet access, coupled with the continuing increase in the power of computing hardware, has created numerous new opportunities for network services. Nevertheless, the state of the network remains much as it was eight years ago, when the web was introduced: heavily based on the client-server model.
In the last eight years, trends in connectivity and PC performance have produced an impressive collection of PCs connected to the Internet, with massive amounts of CPU, disk, and bandwidth resources. However, most of these PCs never use their full potential. This vast array of machines still acts only as clients, never as servers, despite its newfound capacity to do so.
The client-server model suffers from numerous problems. Servers are expensive to maintain, requiring money for hardware, bandwidth, and operations. Traffic on the Internet is unpredictable: in what is known as the “Slashdot Effect”, content on a site may suddenly become popular, flooding the site's servers with so many requests that no further clients can be served. Similarly, centralized sites may suffer from Denial-of-Service (DoS) attacks, in which malicious traffic takes down a site by flooding its connection to the network. Furthermore, some network services, particularly bandwidth-intensive ones such as serving video or audio on a large scale, are simply impossible in a centralized model, since the bandwidth demands exceed the capacity of any single site.
Recently, several decentralized peer-to-peer technologies, in particular Freenet and Gnutella, have been created to harness the collective power of users' PCs to run network services and reduce the cost of serving content. However, Freenet is relatively slow, and it is difficult to actually download content from Gnutella: the protocol does not scale beyond a few thousand hosts, so the amount of content reachable from any given node is limited.
Several research projects which offer O(log n) time to look up an object (where n is the number of nodes participating in the network and each time step consists of one peer machine contacting another) are in various stages of development, including OceanStore at Berkeley and Chord at MIT. However, log n is 10 hops (taking logarithms base 2) even for a network as small as 1000 hosts, which suggests lookup times in excess of half a minute. Furthermore, these systems are designed for scalability, but not explicitly for reliability, and their reliability is untested under real-world conditions running on unreliable machines.
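The back-of-envelope arithmetic behind this claim can be made explicit. The sketch below is illustrative only: the `per_hop_s` value of 3 seconds is an assumed average round-trip between consumer-grade peers (not a figure from any of the systems named above), and `lookup_estimate` is a hypothetical helper, not part of Chord or OceanStore.

```python
import math

def lookup_estimate(n, per_hop_s=3.0):
    """Estimate hops and total time for an O(log n) overlay lookup.

    n          -- number of nodes in the network
    per_hop_s  -- assumed average time for one peer to contact another
                  (hypothetical; real latencies vary widely)
    """
    hops = math.ceil(math.log2(n))  # O(log n) routing, base-2 logarithm
    return hops, hops * per_hop_s

# For a network of 1000 hosts: ceil(log2(1000)) = 10 hops,
# and at ~3 s per hop the lookup takes on the order of 30 seconds.
hops, seconds = lookup_estimate(1000)
```

Under these assumptions a 1000-host network already needs 10 sequential peer contacts per lookup, which is where the "half a minute" figure comes from.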