A major telecommunications service provider may operate one or more very extensive communication networks serving a large number of geographically diverse sites, such as cities or metropolitan areas across a nation or across the globe. The customers of such services often include medium to large business enterprises who have a need to communicate data, voice, video and other forms of traffic. Communications services may span between different sites belonging to a single business enterprise or may include communications between separate business enterprises in support of a business-to-business relationship. A communication service may also connect customers to public networks, such as the public telephone network or to the Internet.
Large communications networks often employ a hierarchical arrangement by which to aggregate, transport and distribute traffic. The portion of the network that carries the highest level of aggregated traffic is often referred to as being the service provider's “core” or “backbone” network. The core network provides long-distance transport of communications among the vast number of endpoints served by the network and may provide a very high volume of communications, particularly between areas where traffic is highly concentrated.
Traditionally, it has been more practical for a core network service provider to operate a few strategically-placed facilities to serve a large number of customers in a given region rather than to extend the service provider's core network to every physical location where customers may reside.
Consequently, although a core network serves a large number of end users, most customer endpoints are not coupled directly to the core network but instead connect through intervening facilities, such as an access link or an access network, to reach a point where the core network is accessible, referred to as a “service edge.” Customer sites in the vicinity of a service provider's edge, or an intermediate hub that provides connection to the service edge, must be connected to the service edge via some form of access link or access network. An access network extends the geographical coverage of the communications service and may also aggregate communications traffic from many customer locations.
Establishing an access link may involve installing a coaxial cable or fiber-optic cable between a customer site and service edge or local hub. Often, however, the existing facilities of a local telephone carrier are leased to provide this connectivity. This usually provides at least a twisted-pair subscriber loop connection to virtually every potential customer location in a metropolitan area. In the case of larger business locations and multi-tenant commercial sites, local telephone facilities typically comprise a large quantity of telephone wires or even provide for wideband access.
For reference, FIG. 1 provides a network diagram 10 depicting a typical prior art access network arrangement by which customer equipment located in one or more buildings 20a-20c may be coupled to various service edge components 65. The service edge components may collectively be viewed as a service edge representing a variety of core networks denoted by service networks 80, 82 and 84. Components 65 may comprise routers, switches, gateways or the like. In some implementations, or with respect to certain customer connections, there may be a greater or lesser number of aggregation stages or transport hops between customer premises equipment (CPE) and a given service edge.
Service networks 80, 82 and 84 may be any variety or combination of Internet Protocol (IP) networks, ATM/FR networks, TDM networks or the like. Although a single access network arrangement is shown for illustration in FIG. 1, each service network will generally comprise a number of service edges coupled to access networks in a similar arrangement to reach a vast number of customer sites on a national or global scale.
The services required by customers may vary greatly in the type of access services, bandwidth, quality of service (QoS), type of legacy equipment and other aspects. Types of services typically include frame relay services, asynchronous transfer mode (ATM) services, broadband services, point-to-point private line services, voice services, and the like. Accordingly, the access service provider must be prepared to provide many types of services to customers of various sizes and needs.
Furthermore, an access network must be capable of meeting the customers' current and future needs in terms of bandwidth, QoS, and the like. For example, a given customer may start with relatively small bandwidth needs yet expand to needing high-bandwidth connections such as a SONET OC-3 or OC-12 connection. Additionally, customers may require ATM services and frame relay services to support legacy systems while concurrently implementing newer applications and communications that are based on IP and Ethernet protocols.
According to the needs of various customers, the type and bandwidth of traffic from each building 20a, 20b and 20c may vary. For example, building 20a is shown to comprise an Ethernet CPE 21, which may actually represent a local area network involving routers, switches, bridges, servers and workstations of a customer's network. Ethernet CPE 21 may be coupled to a service edge for the purposes of providing virtual private LAN service, Internet connectivity, voice-over-IP communications, and the like. Building 20a also depicts a frame relay CPE 24 representing a need to provide for frame relay connectivity to a service edge. This is shown to be accomplished through an M13 multiplexer in a typical arrangement wherein a few DS0 TDM channels may be used in parallel to support frame relay traffic. The multiplexer serves to groom these DS0 channels as part of, for example, a larger DS1 or DS3 TDM signal. Building 20a is shown to be coupled through an optical link or optical ring 30 to a metro node 50. Add/drop multiplexers (ADMs) 22 and 51 are employed along the optical ring 30 to insert and drop traffic along the ring so that building 20a and metro node 50 are coupled through one or more optical communication signals. Both the Ethernet CPE 21 and the frame relay CPE 24 are shown to provide traffic through ADM 22. In practice, a particular model of ADM 22 may be ill-suited for carrying both Ethernet and TDM traffic, so it is often the case that multiple optical rings or links 30 must be provided to a building to accommodate mixtures of traffic types, even if each of the individual optical connections is only partially utilized.
Metro node 50 and Metro/LD hub 60 represent levels of aggregation of traffic that might occur as traffic enters from, or is distributed to, a large number of customer sites over a geographic region in connection with a given service edge. For example, a metro node 50 may serve a number of buildings in a metropolitan area, whereas a number of such metro nodes 50 may, in turn, “feed into” a Metro/LD hub 60. The aggregate flows handed to the service edge at Metro/LD hub 60 may represent traffic pertaining to tens of thousands of users or customers scattered among hundreds of buildings in one or more metropolitan areas.
The coupling between building 20a and metro node 50 represents what may be termed an “on network” situation, wherein a direct optical or electrical communications link is available to reach the building. In some cases, it is practical or cost-effective for a core network service provider to install and operate such connectivity to a customer location. More often, however, the existing facilities of a telephone local exchange carrier are leased to provide this connectivity in what is referred to as an “off network” arrangement, as depicted by the coupling of buildings 20b and 20c through LEC 40. The connection 41 through a LEC is often a T1 (1.544 Mbps) or a DS3 (45 Mbps) TDM communications link.
Obtaining a T1 communications link is costly and time-consuming. There are usually delays in establishing the connection because it involves some manual installing of equipment and patching of cables at various points in the local access network. From a core network service provider's perspective, leasing an access line from another party can be one of the most expensive aspects of providing a communication service. Each T1 line that must be leased represents an initial cost for installation and an ongoing cost for leasing the line on a continuous basis. These costs are particularly high considering that this increment of bandwidth of around 1.5 Mbps is relatively small by today's standards, especially now that 100 megabit-per-second LANs are commonplace even in the home.
Furthermore, as the needs of a customer or site expand over time, it may be necessary to increase the available bandwidth of the connection to the provider edge. A given customer may initiate service with relatively small bandwidth needs yet, in a very short period, the needs may expand to necessitate high-bandwidth connections such as a SONET OC-3 or OC-12 connection. In accordance with practices common in the prior art, an increase in bandwidth may be achieved by ordering another T1 or DS3 facility, which typically involves physical manipulation of cables and equipment at various sites, often delaying the implementation of additional bandwidth by several days. This is also disadvantageous in that, even if only a small increment of additional bandwidth is required, an entire T1 or DS3 must be established and maintained, involving substantial capital expense and ongoing operating expense for a facility that will be underutilized. The increments by which bandwidth may be upgraded are, in some sense, very coarse.
At metro node 50, signals from various buildings physically converge at a distribution frame, such as the manual DS3/fiber distribution frame 52, where access paths may be patched together. A wideband crossconnect switch 53 and a narrowband crossconnect 54 are provided in metro node 50 for the purposes of grooming and manipulating traffic, such as DS0s carrying frame-relay traffic mentioned earlier. Distribution frame 52 represents at least one point at which a considerable amount of manual effort is required to patch cables in order to provision connections between customers and a service edge. Moreover, crossconnects 53 and 54 are indicative of extensive measures to deal with the channelized nature of the TDM communications which are typically used for access networks, as will be described shortly.
To continue in the direction of progressively greater aggregation, optical connection 59 represents an optical link or an optical ring, perhaps shared by one or more metro nodes 50, as a conduit for aggregate flows to be communicated with one or more Metro/LD hubs 60. At Metro/LD hub 60, traffic is selectively redirected to the service edge equipment corresponding to the appropriate service network 80, 82 or 84. Various ADMs 55 and 66 are used to couple traffic to and from optical connection 59. Once the traffic that was optically multiplexed upon optical connection 59 has been extracted and separated at hub 60, each signal enters a manual DS3/fiber distribution frame 62, which is coupled to wideband and narrowband crossconnect switches 63 and 64, respectively. As in the case of metro node 50, these components represent further manual provisioning and the use of cumbersome techniques and expensive equipment to deal with deeply channelized TDM communications being adapted for carrying data.
As illustrated in FIG. 1, prior art provisioning often involves a great deal of manual cable patching at various sites, along with configuring a variety of equipment, including the various ADMs, cross connects, switches, etc. In a typical scenario, it is not unusual for a path between a customer site and a service edge to comprise more than 20 “touch points,” that is, places where a cable must be manually plugged in or equipment must be manually configured in some way. Dispatching personnel to perform these actions is costly, time-consuming and error prone.
Another noteworthy inefficiency imposed by using TDM connections to reach a service provider edge relates to the concept of “deep channelization.” For example, a DS3 signal carries 28 DS1 channels and, in turn, each DS1 carries 24 DS0s. Carrying a customer traffic flow that occupies one or a few DS0s requires multiplexing equipment and low-level crossconnects to route the traffic independently of the other flows that may occupy the remainder of the DS0 and DS1 channels in the composite DS3 signal.
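The channel hierarchy described above can be quantified with a short worked calculation. The figures below use the standard North American PDH rates (64 kbps per DS0, 24 DS0s per DS1, 28 DS1s per DS3); the script itself is merely an illustrative sketch of the arithmetic, not part of any provisioning system.

```python
# Worked example of the DS3/DS1/DS0 "deep channelization" hierarchy.
# Standard North American PDH figures:
DS0_KBPS = 64        # one voice-grade DS0 channel, in kbps
DS0_PER_DS1 = 24     # a DS1 (T1) frame carries 24 DS0s
DS1_PER_DS3 = 28     # a DS3 multiplexes 28 DS1s

ds0_per_ds3 = DS1_PER_DS3 * DS0_PER_DS1      # 28 * 24 = 672 DS0s per DS3
ds0_payload_mbps = ds0_per_ds3 * DS0_KBPS / 1000.0   # 43.008 Mbps of DS0 payload

print(ds0_per_ds3)        # 672
print(ds0_payload_mbps)   # 43.008
```

A flow occupying a single DS0 is thus 1/672 of the composite DS3; equipment must nonetheless demultiplex and groom it individually.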
It is common practice to provide ATM services to a customer by using four DS1 TDM circuits in an inverse multiplexing arrangement. This means that, in addition to adapting ATM traffic onto TDM circuits using special equipment at the customer end, the separate DS1 circuits must each be managed, provisioned and groomed in the service provider's network to reach their proper common destination. This handling requires expensive narrowband/wideband crossconnect switches and multiplexing equipment. These complicated manipulations are a consequence of fitting ATM onto TDM-oriented access network signals.
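The essence of inverse multiplexing can be sketched as a round-robin distribution of cells across the parallel circuits, with re-interleaving at the far end. This is a simplified illustration only; a real inverse multiplexing scheme such as IMA adds control cells, delay compensation and link-failure handling, all omitted here.

```python
# Simplified sketch of inverse multiplexing over parallel TDM circuits:
# cells of one ATM stream are dealt out round-robin to n links, then
# re-interleaved in order at the receiving end.

def inverse_mux(cells, n_links=4):
    """Distribute cells round-robin across n_links parallel circuits."""
    return [cells[i::n_links] for i in range(n_links)]

def inverse_demux(links):
    """Re-interleave cells from the parallel circuits in original order."""
    out = []
    for group in zip(*links):  # assumes equal-length links for brevity
        out.extend(group)
    return out

cells = list(range(8))           # eight cells, identified 0..7
links = inverse_mux(cells)       # [[0, 4], [1, 5], [2, 6], [3, 7]]
assert inverse_demux(links) == cells
```

Each of the four lists corresponds to one DS1 circuit that must be independently provisioned and groomed end to end, which is precisely the management burden noted above.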
Yet another inefficiency of using TDM channels for data access communications relates to the bursty nature of many types of data communications (file transfers, Internet sessions, etc.). By design, TDM circuits are well suited for handling inherently constant-bit-rate communications. When carrying data traffic, however, channels that are momentarily idle cannot lend bandwidth to better accommodate traffic within other channels that are heavily burdened. Instead, channels must be provisioned for constant or maximum anticipated data rates and thereafter occupy reserved bandwidth even when not actually being used to carry traffic.
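The cost of this rigid reservation can be seen with a toy calculation using hypothetical traffic figures: sizing each TDM channel for its own peak always reserves at least as much capacity as sizing a shared link for the peak of the aggregate, because idle channels cannot lend bandwidth to busy ones.

```python
# Toy illustration (hypothetical offered loads) of TDM over-reservation
# for bursty traffic: per-channel peak reservation vs. peak of the aggregate.
import random

random.seed(1)
# Offered load (Mbps) of 8 bursty flows across 10 time slots:
# each flow is idle, trickling, or bursting in any given slot.
flows = [[random.choice([0.0, 0.2, 1.5]) for _ in range(10)] for _ in range(8)]

tdm_capacity = sum(max(f) for f in flows)            # each channel sized for its peak
shared_peak = max(sum(col) for col in zip(*flows))   # peak of the aggregate load

# sum-of-peaks >= peak-of-sums holds for any traffic pattern.
print(tdm_capacity >= shared_peak)   # True
```

The gap between the two figures is bandwidth reserved but unused, which is what statistically multiplexed (packet-oriented) access avoids.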
Thus, the traditional approach of fulfilling a variety of access needs over TDM links imposes many undesirable constraints that hinder efficient and effective service to customers. The alternative of accommodating each type of access communication protocol over separate, dedicated access facilities also increases the costs and management burden on a service provider. A primary concern for network providers is therefore simplifying the handling of many diverse traffic flows and improving the overall efficiency and scalability of the access network.