The present invention generally relates to a method for transmitting data over a Synchronous Optical Network (SONET) and, more particularly, to a method for providing robust Asynchronous Transfer Mode (ATM) traffic over a SONET network.
Recent years have witnessed an increase in communications network bandwidth demands. The present day T1 and T3 communications networks are being supplanted by higher throughput networks such as SONET networks. A Synchronous Optical Network (SONET) is a type of communications network capable of transmitting data in the gigabit per second range in some implementations.
The basic building block in a SONET is the synchronous transport signal level-1 (STS-1). The STS-1 is transported at a 51.840 Mb/s serial transmission rate using an optical carrier level-1 (OC-1) optical signal. Higher data rates are transported in a SONET by synchronously multiplexing N lower level modules (such as STS-1s) together to form an STS-N. Each STS-N frame is transmitted in 125 μs so that 8000 frames occur per second. The rate of data transmission over a SONET network may be described as either the rate of electrical transmission (synchronous transport signal level) or the rate of optical transmission (optical carrier level), which are equivalent. Thus, an STS-1 line rate corresponds to an OC-1 line rate.
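The line rates above follow directly from the frame geometry. The following is an illustrative sketch (not part of the original disclosure) that derives the 51.840 Mb/s STS-1 rate from the 810-byte frame and the 8000 frame-per-second transmission interval:

```python
# Illustrative sketch: derive the STS-1 line rate from the frame geometry.
# An STS-1 frame is 90 columns x 9 rows = 810 bytes, and one frame is sent
# every 125 microseconds (8000 frames per second).

COLS, ROWS = 90, 9
FRAMES_PER_SEC = 8000          # one frame every 125 us
BITS_PER_BYTE = 8

frame_bytes = COLS * ROWS                            # 810 bytes per frame
sts1_rate = frame_bytes * BITS_PER_BYTE * FRAMES_PER_SEC

print(sts1_rate)               # 51840000 b/s = 51.840 Mb/s

# An STS-N is N byte-interleaved STS-1s, so its rate is simply N times that:
def sts_rate(n: int) -> int:
    return n * sts1_rate

print(sts_rate(3))             # 155520000 b/s (the STS-3/OC-3 rate)
```

Note that the 51.840 Mb/s figure includes the transport overhead columns; only the 87-column SPE carries payload and path overhead.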
The STS-1 frame structure has two parts, the transport overhead and the synchronous payload envelope (SPE). The transport overhead occupies the first three columns of the 90 column by 9 row STS-1 frame, and the remaining 87 columns form the SPE. The STS-1 frame is assembled as follows. The data payload to be transported is first mapped into the SPE. This operation is defined by the path layer and is accomplished using path terminating equipment. Associated with the path layer are some additional bytes named the path overhead (POH) bytes, which are also placed in the SPE. After the formation of the SPE, the SPE is placed into the frame along with some additional overhead bytes named the line overhead (LOH) bytes. The LOH bytes are used to provide information for line protection and maintenance purposes. The LOH is created and used by line terminating equipment such as the multiplexers between optical carriers. The next layer is defined as the section layer. It is used to transport the STS-1 frame over a physical medium. Associated with this layer are the section overhead (SOH) bytes. These bytes are used for framing, section error monitoring and section level equipment communications. The physical layer is the final layer and transports bits serially as either optical or electrical entities. There is no overhead at this layer.
Four different-size payloads, called virtual tributaries (VTs), fit into the SPE of the STS-1. These are: VT1.5, which is 1.728 Mb/s; VT2, which is 2.304 Mb/s; VT3, which is 3.456 Mb/s; and VT6, which is 6.912 Mb/s. Each VT requires a 500 μs structure (four STS-1 frames) for transmission.
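The four VT rates can likewise be derived from the frame geometry. The sketch below assumes the standard VT column widths (3, 4, 6, and 12 columns for VT1.5, VT2, VT3, and VT6 respectively), which are not stated explicitly above:

```python
# Illustrative sketch: derive the VT rates from the 9-row, 8000-frame/s
# STS-1 geometry. Assumed column widths: VT1.5 = 3 columns, VT2 = 4,
# VT3 = 6, VT6 = 12 (standard SONET values, not stated in the text above).

ROWS, BITS_PER_BYTE, FRAMES_PER_SEC = 9, 8, 8000
VT_COLUMNS = {"VT1.5": 3, "VT2": 4, "VT3": 6, "VT6": 12}

for name, cols in VT_COLUMNS.items():
    rate = cols * ROWS * BITS_PER_BYTE * FRAMES_PER_SEC
    print(f"{name}: {rate / 1e6:.3f} Mb/s")
# VT1.5: 1.728 Mb/s, VT2: 2.304 Mb/s, VT3: 3.456 Mb/s, VT6: 6.912 Mb/s
```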
An STS-N is formed by byte interleaving the multiple STS-1 signals that comprise the STS-N signal. An STS-N may be thought of as an N×810 byte structure or as an N×90 column by 9 row structure. A concatenated STS (STS-Nc) is a number of STS-1s that are maintained together. Certain services, such as asynchronous transfer mode (ATM) payloads, may find such STS-Nc structures appealing because multiples of the STS-1 rate are mapped into a single STS-Nc SPE. The STS-Nc is multiplexed, switched, and transported as a single unit.
A SONET network is often implemented as a SONET ring. A SONET ring is a series of communication nodes interconnected by links to form a closed loop, where the links are fiber optic cables and the nodes are SONET multiplex equipment with additional ring functions. In general, SONET rings are of three types: the Unidirectional Path Switched Ring (UPSR), the 2-Fiber Bidirectional Line-Switched Ring (BLSR), and the 4-Fiber BLSR. All three architectures provide physical circuit protection for improved transport survivability: self-healing via SONET Path Selection on the UPSR and Automatic Protection Switching (APS) on the BLSRs.
An Add/Drop Multiplexer (ADM) is a SONET multiplexer that allows signals to be added into or dropped from an STS-1. ADMs have two bidirectional ports, commonly referred to as the east and west ports. ADMs may be used in SONET Self-Healing Ring (SHR) architectures. A SHR uses a collection of nodes equipped with ADMs in a physical closed loop so that each node is connected to two adjacent nodes in a duplex connection. Any loss of connection due to a single failure of a node or of a connection between nodes may be automatically restored in this topology, although the data traffic sourced or sunk (delivered) at a failed node is lost.
A UPSR normally has working traffic and protection traffic provisioned such that they travel in opposite directions around the ring and do not traverse the same intermediate nodes. Working traffic may also be set up such that both directions of transmission are bidirectional on the ring. UPSRs are defined for 2-fiber rings: one fiber ring carries a working signal (SONET STS/VT path) in one direction, and the second fiber ring carries an identical "protection" signal in the opposing direction. Because UPSRs carry the same traffic in opposing directions on two different fiber rings, they are sometimes referred to as counter-rotating rings. A UPSR implements "self-healing" by using a Path Selector to compare the working and protection signals (SONET paths) that are terminating at the receiving node in order to select which of the two to drop.
Time-division multiplexing (TDM) is the most common multiplexing technique in use today. TDM time interleaves the supported channels onto the same transmission medium. TDM requires a rigid allocation of the transmission resource in which the available bandwidth is fully used only if all of the channels are active simultaneously. Therefore, TDM is well suited to support communication services with a constant activity rate for the duration of the connection, as in the case of voice services. Other services whose information sources are active only for a small percentage of time, typically data services, tend to waste transmission bandwidth in TDM networks because the bandwidth is allocated according to peak needs.
The asynchronous transfer mode (ATM) technique is intended to avoid wasting bandwidth by sharing transmission and switching resources between several connections without any static bandwidth allocation to individual connections. Therefore, information from the various connections may be statistically multiplexed onto the same communication resource, thus avoiding resource waste when the source activity level is low. ATM multiplexing requires, however, that each piece of information be accompanied by its routing information, which is no longer given by the position of the information within a frame as in the case of TDM.
ATM is a cell switching and multiplexing standard that allows a single switch and transport network to handle all services such as data, multimedia, and image services with one standard. Logical channels are formed using the cell headers. ATM switches in the network act on the headers to logically route the cells through the network. ATM may support variable rate and constant rate traffic and is scalable to support services of different bandwidths.
An ATM cell includes a 5-byte cell header and a 48-byte payload. The cell header includes the following fields: the Virtual Path Identifier (VPI), the Virtual Channel Identifier (VCI), the Payload Type (PT), the Cell Loss Priority (CLP) and the Header Error Control (HEC). Additionally, ATM requires connections to be established prior to data flow. ATM uses routing tables at each node along the path of a connection that map the connection identifiers from the incoming links to the outgoing links. Two levels of routing hierarchies, Virtual Paths (VPs) and Virtual Channels (VCs), are defined for ATM traffic.
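The cell header fields above can be illustrated with a short sketch. It assumes the UNI header format, in which the first four bits are a Generic Flow Control (GFC) field and the VPI is 8 bits; at the NNI those four bits instead extend the VPI to 12 bits:

```python
# Illustrative sketch: unpack the VPI, VCI, PT, CLP, and HEC fields from a
# 5-byte ATM cell header. Assumes the UNI header layout:
# GFC(4) | VPI(8) | VCI(16) | PT(3) | CLP(1) | HEC(8).

def parse_uni_header(hdr: bytes) -> dict:
    assert len(hdr) == 5, "an ATM cell header is exactly 5 bytes"
    b0, b1, b2, b3, b4 = hdr
    return {
        "gfc": b0 >> 4,                                      # generic flow control
        "vpi": ((b0 & 0x0F) << 4) | (b1 >> 4),               # 8-bit VPI at the UNI
        "vci": ((b1 & 0x0F) << 12) | (b2 << 4) | (b3 >> 4),  # 16-bit VCI
        "pt":  (b3 >> 1) & 0x7,                              # 3-bit payload type
        "clp": b3 & 0x1,                                     # cell loss priority bit
        "hec": b4,                                           # header error control
    }

# Hypothetical example header carrying VPI=42, VCI=100:
hdr = bytes([0x02, 0xA0, 0x06, 0x40, 0x00])
print(parse_uni_header(hdr))   # vpi=42, vci=100, pt=0, clp=0
```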
A VP is a collection of one or more VCs traversing multiple nodes. Each VP has a bandwidth associated with it limiting the aggregate bandwidth of VCs that may be multiplexed within that VP. Virtual path identifiers (VPIs) are used to route cells between nodes that originate, remove, or terminate the VPs. Virtual channel identifiers (VCIs) are used at end nodes to distinguish between individual connections. It is noted that there is no difference between a VP and a VC when a VP is defined over a single physical link. When a VP is defined over two or more physical links, it reduces the size of the routing tables by allowing a number of VCs to be switched based on a single identifier, that is, the VPI.
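The routing-table reduction described above can be sketched as follows. The tables and identifiers here are hypothetical, chosen only to show that a VP cross-connect switches on the VPI alone, so a single table entry covers every VC inside the path:

```python
# Illustrative sketch (hypothetical tables): at a node that only cross-
# connects VPs, cells are switched on (port, VPI) alone, and the VCI passes
# through untouched. Per-VC entries are needed only where VCs terminate.

# One entry per VP: incoming (port, VPI) -> outgoing (port, VPI)
vp_table = {(1, 42): (2, 7)}

def switch_vp(port: int, vpi: int, vci: int):
    out_port, out_vpi = vp_table[(port, vpi)]
    return out_port, out_vpi, vci      # VCI is carried through unchanged

# Thousands of VCs riding VPI 42 all match this single table entry:
print(switch_vp(1, 42, 100))   # (2, 7, 100)
print(switch_vp(1, 42, 101))   # (2, 7, 101)
```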
Two distinctive features characterize an ATM network: (1) the user information is transferred through the network in small fixed-size units called ATM cells, each 53 bytes long, and (2) the network is connection-oriented. That is, cells are transferred using preconfigured paths identified by a label carried in the cell header.
ATM switches in an ATM network act on the information in the cell headers to logically route the cells through the network and manage variable bandwidth to the customer on a VP/VC basis. Thousands of VCs may be carried in a VP, and hundreds or thousands of VPs may be carried in a physical link. For example, a standard signal rate for carrying ATM cells is a SONET concatenated STS-3c signal. The ATM layer is processed by ATM switches that make routing decisions based on the Virtual Path Identifier (VPI) and Virtual Channel Identifier (VCI) bits in the cell headers.
Thus, an ATM network offers flexibility with regard to the type of data traffic that can be supported and the communications bandwidth that may be allocated to a specific application. A SONET network, on the other hand, is a fast, high-bandwidth network. Combining the speed of a SONET network with the flexibility of ATM traffic may therefore yield a network that is both fast and flexible. This is highly desirable.
A SONET ATM virtual path ring (VPR) (either UPSR or BLSR) is generally similar to a SONET based ring with the exception that protection is performed at the VP level instead of the VT level. The term VP, in the context of the SONET ATM VP ring, refers to the bandwidth available for ATM cells to be transported transparently between a pair of ring nodes.
A SONET Unidirectional Path Switched Ring (UPSR) supporting ATM traffic requires protection at the Virtual Path (VP) level. Providing protection at the VP level instead of at the VT level provides two major benefits. First, VPR VP level protection allows variable bandwidth sizes instead of the fixed VT sizes. The VP size is limited only by the physical medium's transmission rate. Second, VPR VP level protection allows for a much greater number of connections on the ring. This is because the variable bandwidth VPs can be much smaller than the smallest VTs. The use of VPs also affords additional flexibility because the VTs are defined in fixed increments which cannot be changed.
The basic data transfer vehicle of a SONET network is the Synchronous Transport Signal Level-1 (STS-1). An STS-N is formed by byte interleaving the STS-1 signals that comprise the STS-N signal. Each STS-N is partitioned into a transport overhead segment and a synchronous payload envelope (SPE) segment.
VP level protection is necessary to reliably transport ATM traffic within the STS-N envelope. In a synchronous, Time Division Multiplexed (TDM) system such as a SONET UPSR network, a particular connection such as an STS-N or a VT has a single source (ingress) to the UPSR ring and a single destination (egress) from the ring. The VPR feature allows multiple ring nodes to source and receive traffic to and from an STS-N or STS-Nc. A failure on the ring may cause the receiving node to receive part of its ATM traffic from the clockwise (CW) direction and part of its traffic from the counter-clockwise (CCW) direction. The STS-N ADM (Add/Drop Multiplexer) selects the appropriate STS-N from either the CW or the CCW direction.
While the standard SONET protections are sufficient to protect virtual tributaries (VTs), the standard protection is insufficient to protect VPs. In the TDM world, the VTs within the STS-N are protected by bridging the traffic at the ingress (sending the traffic on both the CCW and CW fibers) and continuously checking the Cyclic Redundancy Check (CRC) at the egress for both the CCW and CW fibers. This is not a feasible implementation for the ATM world because of the nature of the traffic: ATM traffic's bandwidth can vary and can be bursty in nature. For example, in order to implement a sub-60 ms protection mechanism on a per-VP basis, each VP requires 84.8 kb/s of overhead (53 bytes × 8 bits/byte divided by 5 ms). In this example, the massive number of VPs needing to be protected would consume half the bandwidth of an OC-3. In addition, it would require a considerable amount of processing power, both hardware and software, to detect a segment failure. This is not a very cost efficient approach.
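The arithmetic behind this example can be made explicit. The sketch below (illustrative only) computes the 84.8 kb/s per-VP figure, assuming one 53-byte heartbeat cell per VP every 5 ms, and estimates how many such VPs would consume half of an OC-3:

```python
# Illustrative sketch of the per-VP overhead arithmetic: one 53-byte cell
# per VP every 5 ms (to stay within the sub-60 ms protection budget).

CELL_BITS = 53 * 8                 # 424 bits per ATM cell
PERIOD_S = 0.005                   # one heartbeat cell every 5 ms

per_vp_overhead = CELL_BITS / PERIOD_S
print(per_vp_overhead)             # 84800.0 b/s = 84.8 kb/s per VP

# Roughly 900 such VPs would already consume half of an OC-3:
oc3_rate = 155_520_000             # OC-3 line rate in b/s
vps_for_half = (oc3_rate / 2) / per_vp_overhead
print(round(vps_for_half))         # ~917 VPs
```

Since a single STS-3c can carry far more than 900 VPs, the per-VP heartbeat approach does not scale, which motivates the need for a different protection mechanism.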
Also, once the failure is detected, there is still the need to perform a protection switch quickly. However, a fiber cut can cause thousands of VPs to fail. This would place processing demands on the network element that exceed commercially available processing resources. A commercially realistic network must be able to both detect a communication failure and perform the UPSR protection switch within a short time, preferably within 60 ms.
Although some initial explorations have begun as to implementing ATM traffic over a SONET network, currently no standards exist for protecting the integrity of ATM traffic on SONET rings. One proposed set of criteria for implementing ATM traffic in a SONET ring is described in the Bellcore publication numbered GR-2837-CORE, dated Dec. 1, 1994 and titled "ATM Virtual Path Functionality in SONET Rings--Generic Criteria" (hereafter the Bellcore criteria). However, the Bellcore criteria provide no method for protecting ATM traffic on a SONET ring.
A need remains for protection for ATM traffic on a SONET network ring. It is an object of the preferred embodiment of the present invention to meet this need.