The availability of public and private telephony communication has become so universal in the United States and other developed countries that it is critical to the functioning of modern society. At the same time, its very popularity and multipurpose use have led to demands that tax the efficient operation of the varied systems that provide that universality. While multiple types of networks are used to provide telephone service, it is not always appreciated how extensively those networks are interrelated. Thus, the very efficacy of one of the interrelated systems in optimizing one parameter may cause the overload or breakdown of another system that is necessary to provide end-to-end service. In order that the nature of these problems may be fully appreciated, it is necessary to understand not only the factors that determine the inherent parameters of the individual networks, but also the factors that impose limitations on the interlinked networks. To that end, there is presented here a brief description of the two networks most directly involved: the Internet and the public switched telephone network (PSTN).
The Internet is an interconnected global computer network of tens of thousands of packet-switched networks using the Internet protocol (IP). It is a network of networks. For purposes of understanding how the Internet works, three basic types of entities can be identified. These are end users, Internet service providers, and backbone providers. End users access and send information either through individual connections or through organizations such as universities and businesses. End users in this context include both those who use the Internet primarily to receive information, and content creators who use the Internet to distribute information to other end users. Internet service providers (ISPs), such as Netcom, PSI, and America Online, connect those end users to Internet backbone networks. Backbone providers, such as MCI, UUNet, and Sprint, route traffic between ISPs, and interconnect with other backbone providers.
This tripartite division highlights the different functionalities involved in providing Internet connectivity. The actual architecture of the Internet is far more complex. Backbone providers typically also serve as ISPs; for example, MCI offers dial-up and dedicated Internet access to end users, but also connects other ISPs to its nationwide backbone. End users such as large businesses may connect directly to backbone networks, or to access points where backbone networks exchange traffic. ISPs and backbone providers typically have multiple points of interconnection, and the inter-relationships between these providers are changing over time. It is important to appreciate that the Internet has no “center,” and that individual transmissions may be routed through multiple different providers based on a number of factors.
End users may access the Internet through several different types of connections, and unlike the voice network, the divisions between “local service” providers and “long-distance” providers are not always clear. Most residential and small business users have dial-up connections, which use analog modems to send data over plain old telephone service (POTS) lines of local exchange carriers (LECs) to ISPs. Larger users often have dedicated connections, using high-speed ISDN, frame relay, or T1 lines, between a local area network at the customer's premises and the Internet. Although the vast majority of Internet access today originates over telephone lines, other types of communications companies, such as cable companies, terrestrial wireless providers, and satellite providers, are also beginning to enter the Internet access market.
The roots of the current Internet can be traced to ARPANET, a network developed in the late 1960s with funding from the Advanced Research Projects Agency (ARPA) of the United States Department of Defense. ARPANET linked together computers at major universities and defense contractors, allowing researchers at those institutions to exchange data. As ARPANET grew during the 1970s and early 1980s, several similar networks were established, primarily between universities. The TCP/IP protocol suite was adopted as a standard to allow these networks, composed of many different types of computers, to interconnect.
In the mid-1980s, the National Science Foundation (NSF) funded the establishment of NSFNET, a TCP/IP network that initially connected six NSF-funded national supercomputing centers at a data rate of 56 kilobits per second (kbps). NSF subsequently awarded a contract to a partnership of Merit (one of the existing research networks), IBM, MCI, and the State of Michigan to upgrade NSFNET to T1 speed (1.544 megabits per second (Mbps)), and to interconnect several additional research networks. The new NSFNET “backbone,” completed in 1988, initially connected thirteen regional networks. Individual sites such as universities could connect to one of these regional networks, which then connected to NSFNET, so that the entire network was linked together in a hierarchical structure. Connections to the federally-subsidized NSFNET were generally free for the regional networks, but the regional networks generally charged smaller networks a flat monthly fee for their connections.
The military portion of ARPANET was integrated into the Defense Data Network in the early 1980s, and the civilian ARPANET was taken out of service in 1990, but by that time NSFNET had supplanted ARPANET as a national backbone for an “Internet” of worldwide interconnected networks. In the late 1980s and early 1990s, NSFNET usage grew dramatically, jumping from 85 million packets in January 1988 to 37 billion packets in September 1993. The capacity of the NSFNET backbone was upgraded to handle this additional demand, eventually reaching T3 (45 Mbps) speed.
In 1992, the NSF announced its intention to phase out federal support for the Internet backbone, and encouraged commercial entities to set up private backbones. Alternative backbones had already begun to develop because NSFNET's “acceptable use” policy, rooted in its academic and military background, ostensibly did not allow for the transport of commercial data. In the 1990s, the Internet has expanded decisively beyond universities and scientific sites to include businesses and individual users connecting through commercial ISPs and consumer online services.
Federal support for the NSFNET backbone ended on Apr. 30, 1995. The NSF has, however, continued to provide funding to facilitate the transition of the Internet to a privately-operated network. The NSF supported the development of three priority Network Access Points (NAPs), in Northern California, Chicago, and New York, at which backbone providers could exchange traffic with each other, as well as a “routing arbiter” to facilitate traffic routing at these NAPs. The NSF funded the vBNS (Very High-Speed Backbone Network Service), a non-commercial research-oriented backbone operating at 155 megabits per second. The NSF provides transitional funding to the regional research and educational networks, as these networks are now required to pay commercial backbone providers rather than receiving free interconnection to NSFNET. Finally, the NSF also remains involved in certain Internet management functions, through activities such as its cooperative agreement with SAIC Network Solutions Inc. to manage aspects of Internet domain name registration.
Since the termination of federal funding for the NSFNET backbone, the Internet has continued to evolve. Many of the largest private backbone providers have negotiated bilateral “peering” arrangements to exchange traffic with each other, in addition to multilateral exchange points such as the NAPs. Several new companies have built nationwide backbones. Despite this increase in capacity, usage has increased even faster, leading to concerns about congestion. The research and education community, with the support of the White House and several federal agencies, recently announced the “Internet II” or “next-generation Internet” initiative to establish a new high-speed Internet backbone dedicated to non-commercial uses.
As of January 1997 there were over sixteen million host computers on the Internet, more than ten times the number of hosts in January 1992. Several studies have produced different estimates of the number of people with Internet access, but the numbers are clearly substantial and growing. A recent Intelliquest study pegged the number of subscribers in the United States at 47 million, and Nielsen Media Research concluded that 50.6 million adults in the United States and Canada accessed the Internet at least once during December 1996—compared to 18.7 million in spring 1996. Although the United States is still home to the largest proportion of Internet users and traffic, more than 175 countries are now connected to the Internet.
According to a study by Hambrecht & Quist, the Internet market exceeded one billion dollars in 1995, and is expected to grow to some 23 billion dollars in the year 2000. This market comprises several segments, including network services (such as ISPs); hardware (such as routers, modems, and computers); software (such as server software and other applications); enabling services (such as directory and tracking services); expertise (such as system integrators and business consultants); and content providers (including online entertainment, information, and shopping).
The value of networks to each user increases as additional users are connected. For example, electronic mail is a much more useful service when it can reach fifty million people worldwide than when it can only be used to send messages to a few hundred people on a single company's network. The same logic applies to the voice telephone network.
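The point about network value can be made concrete with a simple count of possible pairwise connections, in the familiar Metcalfe's-law style of reasoning (the specific user counts below are illustrative, not figures from this discussion):

```python
def possible_pairs(n: int) -> int:
    """Number of distinct pairs of users who can exchange messages on an n-user network."""
    return n * (n - 1) // 2

# A few hundred users on a single company's network vs. fifty million worldwide:
small = possible_pairs(300)         # 44,850 possible correspondent pairs
large = possible_pairs(50_000_000)  # over a quadrillion possible pairs
```

Because the pair count grows roughly with the square of the user count, each new user increases the value of the network to every existing user.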
However, this increasing value also can lead to congestion. Network congestion is an example of the “tragedy of the commons:” each user may find it beneficial to increase his or her usage, but the sum total of all usage may overwhelm the capacity of the network. With the number of users and host computers connected to the Internet roughly doubling each year, and traffic on the Internet increasing at an even greater rate, the potential for congestion is increasing rapidly. The growth of the Internet, and evidence of performance degradation, has led some observers to predict that the network will soon collapse, although thus far the Internet has defied all predictions of its impending doom.
Two types of Internet-related congestion may occur: congestion of the Internet backbones, and congestion of the public switched telephone network when it is used to access the Internet. These categories are often conflated, and from an end user's standpoint the point of congestion matters less than the delays it creates.
Congestion of the Internet backbones results largely from the shared, decentralized nature of the Internet. Because the Internet interconnects thousands of different networks, each of which only controls the traffic passing over its own portion of the network, there is no centralized mechanism to ensure that usage at one point on the network does not create congestion at another point. Because the Internet is a packet-switched network, additional usage, up to a certain point, only adds additional delay for packets to reach their destination, rather than preventing a transmission circuit from being opened. This delay may not cause difficulties for some services such as E-mail, but could be fatal for real-time services such as video conferencing and Internet telephony. At a certain point, moreover, routers may be overwhelmed by congestion, causing localized temporary disruptions known as “brownouts.”
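The contrast between packet-switched delay and circuit blocking can be sketched with a textbook M/M/1 queueing model (an illustrative assumption, not a description of any actual backbone router): as offered load approaches a link's capacity, delay grows without bound rather than connections being refused outright.

```python
def mm1_mean_delay(arrival_rate: float, service_rate: float) -> float:
    """Mean time a packet spends in an M/M/1 queue (waiting plus service), in seconds.

    Rates are in packets per second; the closed-form result 1/(mu - lambda)
    holds only while utilization (arrival_rate / service_rate) is below 1.
    """
    if arrival_rate >= service_rate:
        return float("inf")  # saturated: the queue grows without bound
    return 1.0 / (service_rate - arrival_rate)

# A router forwarding up to 1000 packets/s: delay climbs steeply as load nears capacity.
for load in (500.0, 900.0, 990.0):
    print(load, mm1_mean_delay(load, 1000.0))
```

At 50% utilization the mean delay is 2 ms; at 99% it is 100 ms, which is tolerable for E-mail but already problematic for real-time services.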
Backbone providers have responded to this congestion by increasing capacity. Most of the largest backbones now operate at 155 Mbps (OC-3) speeds, and MCI has upgraded its backbone to OC-12 (622 Mbps) speed. Backbone providers are also developing pricing structures, technical solutions, and business arrangements to provide more robust and reliable service for applications that require it, and for users willing to pay higher fees.
Internet backbone congestion raises many serious technical, economic, and coordination issues. Higher-bandwidth access to the Internet will be meaningless if backbone networks cannot provide sufficient end-to-end transmission speeds. Moreover, the expansion of bandwidth available to end users will only increase the congestion pressure on the rest of the Internet. This has significant implications for local exchange carriers. Most residential subscribers reach their ISPs through dial-up connections to LEC networks. A modem at the customer premises is connected to a local loop, which is connected to a switch at a LEC central office. ISPs also purchase connections to the LEC network. In most cases, ISPs either buy analog lines under business user tariffs (referred to as “1MBs”) or 23-channel primary rate ISDN (PRI) service. When a call comes into an ISP, it is received through a modem bank or a remote access server, and the data is sent out through routers over the packet-switched Internet. Both subscribers and ISPs share usage of LEC switches with other customers.
It is becoming increasingly apparent that the current flat-rate pricing structure for Internet access contributes to the congestion of LEC networks. Switch congestion can arise at three points in a LEC network: the switch at which the ISP connects to the LEC (the terminating switch), the interoffice switching and transport network, and the switch serving the originating end user. The point of greatest congestion is the switch serving the ISP, because many different users call into the ISP simultaneously.
LECs have engineered and sized their networks based on assumptions about voice traffic. In particular, several decades of data collection and research by AT&T, Bellcore, and others have shown that an average voice call lasts 3-5 minutes, and that the distribution between long and short calls follows a well-established curve. Because very few people stay on the line for very long periods, there is no need for a LEC switch to support simultaneous connections for all of its users. Instead, LEC switches are generally divided into “line units” or “line concentrators” with concentration ratios typically between 4:1 and 8:1; in other words, there are between four and eight subscriber lines for every call path going through the switch. Call blockage on the voice network tends to be negligible because it is unlikely that a significant percentage of users will be connected simultaneously.
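The voice-traffic engineering described above can be illustrated with the classical Erlang B formula, which gives the probability that all call paths through a concentrator are busy for a given offered load. The line count and per-line calling rate below are hypothetical, chosen only to show why blocking is negligible under voice assumptions:

```python
def erlang_b(paths: int, offered_load_erlangs: float) -> float:
    """Erlang B blocking probability, computed with the standard stable recurrence."""
    b = 1.0
    for n in range(1, paths + 1):
        b = (offered_load_erlangs * b) / (n + offered_load_erlangs * b)
    return b

# Hypothetical line unit: 512 subscriber lines at 8:1 concentration -> 64 call paths.
# Busy-hour voice traffic: roughly one 4-minute call per line, i.e. 4/60 erlang each.
lines = 512
paths = lines // 8
load = lines * (4 / 60)        # about 34 erlangs offered to 64 paths
print(erlang_b(paths, load))   # well under one percent: blocking is negligible
```

With 64 paths carrying only about 34 erlangs of offered voice traffic, the probability that a caller finds every path busy is vanishingly small, which is why the 4:1 to 8:1 concentration ratios have historically been safe.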
The distribution of Internet calls differs significantly from voice calls. In particular, Internet users tend to stay on the line substantially longer than voice users.
Because LEC networks have not been designed for these longer usage patterns, heavy Internet usage can result in switches being unable to handle the load (“switch congestion”). An Internet connection ties up an end-to-end call path through the PSTN for the duration of the call. When the average hold time of calls through a switch increases significantly, the likelihood that all available call paths through the switch will be in simultaneous use also goes up. If a particular line unit has an 8:1 concentration ratio, only one-eighth of the subscriber lines into that line unit need to be connected at one time in order to block all further calls.
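The arithmetic behind this can be sketched as follows. The line count and hold times are hypothetical; the 45-minute figure simply stands in for the long data sessions described above:

```python
def call_paths(lines: int, concentration: int) -> int:
    """Call paths available through a line unit at a given concentration ratio."""
    return lines // concentration

def offered_load(lines: int, calls_per_hour: float, avg_hold_minutes: float) -> float:
    """Offered traffic in erlangs: arrival rate times holding time, summed over lines."""
    return lines * calls_per_hour * (avg_hold_minutes / 60.0)

lines = 512
paths = call_paths(lines, 8)                  # 64 paths at 8:1 concentration
voice = offered_load(lines, 1.0, 4.0)         # ~34 erlangs: comfortably under 64 paths
internet = offered_load(lines, 1.0, 45.0)     # 384 erlangs: six times the available paths
print(paths, voice, internet, paths / lines)  # 1/8 of lines connected blocks the unit
```

The same subscribers making the same number of calls per hour offer roughly ten times the traffic when hold times stretch from minutes to the better part of an hour, so a line unit engineered for voice saturates with only a fraction of its lines in use.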
Because of the relatively short average duration of voice calls, the primary limiting factor on the capacity of current digital switches for voice calls is the computer processing power required to set up additional calls. Computer processing power can be expanded relatively easily and cheaply, because modern switch central processing units are designed as modular systems that can be upgraded with additional memory and processing capacity. However, Internet usage puts pressure not on the call setup capacity of the switch, but on the number of transmission paths that are concurrently open through the switch.
As may be appreciated from the foregoing, the traffic problems that exist in providing reliable telephony communications, particularly long-distance communications, involve intertwined limitations that exist separately and in combination in the Internet and in the public switched telephone network.