Data communication networks may include various computers, servers, nodes, routers, switches, bridges, hubs, proxies, and other network devices coupled to and configured to pass data to one another. These devices will be referred to herein as “network elements.” Data is communicated through the data communication network by passing protocol data units, such as Internet Protocol packets, Ethernet frames, data cells, segments, or other logical associations of bits/bytes of data, between the network elements by utilizing one or more communication links between the network elements. A particular protocol data unit may be handled by multiple network elements and cross multiple communication links as it travels between its source and its destination over the network.
The various network elements on the communication network communicate with each other using predefined sets of rules, referred to herein as protocols. Different protocols are used to govern different aspects of the communication, such as how signals should be formed for transmission between network elements, the format of the protocol data units, how protocol data units should be handled or routed through the network by the network elements, and how routing information should be exchanged between the network elements.
Ethernet is a well-known networking protocol that has been defined by the Institute of Electrical and Electronics Engineers (IEEE) as standard 802.3, with Ethernet bridging defined in the IEEE 802.1 family of standards. In Ethernet network architectures, devices connected to the network compete for the ability to use shared telecommunications paths at any given time. Where multiple bridges or nodes are used to interconnect network segments, multiple potential paths to the same destination often exist. The benefit of this architecture is that it provides path redundancy between bridges and permits capacity to be added to the network in the form of additional links. However, to prevent loops from being formed, a spanning tree was generally used to restrict the manner in which traffic was broadcast on the network. Since routes were learned by broadcasting a frame and waiting for a response, and since both the request and the response would follow the spanning tree, all of the traffic would follow the links that were part of the spanning tree. This often led to over-utilization of the links that were on the spanning tree and non-utilization of the links that were not.
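The link utilization problem described above can be illustrated with a short sketch. The actual Spanning Tree Protocol election is more involved; a breadth-first search is used here merely to produce some spanning tree over a meshed topology (the node and link names are hypothetical), showing that every off-tree link sits idle while all broadcast traffic concentrates on the tree links.

```python
from collections import deque

def spanning_tree_links(links, root):
    """Compute a spanning tree of the topology by breadth-first search
    from an elected root bridge, and report which links end up unused."""
    adjacency = {}
    for a, b in links:
        adjacency.setdefault(a, []).append(b)
        adjacency.setdefault(b, []).append(a)
    visited = {root}
    tree = set()
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for neighbor in adjacency[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                tree.add(frozenset((node, neighbor)))
                queue.append(neighbor)
    unused = {frozenset(link) for link in links} - tree
    return tree, unused

# A full mesh of four bridges has six links; any spanning tree uses
# exactly three of them, leaving the other three carrying no traffic.
mesh = [("A", "B"), ("A", "C"), ("A", "D"),
        ("B", "C"), ("B", "D"), ("C", "D")]
tree, unused = spanning_tree_links(mesh, "A")
```

In this four-bridge mesh, half of the installed capacity is blocked purely to avoid loops, which is the inefficiency the link state approach described below seeks to remove.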
To overcome some of the limitations inherent in Ethernet networks, a link state protocol controlled Ethernet network was disclosed in application Ser. No. 11/537,775, filed Oct. 2, 2006, entitled “Provider Link State Bridging,” the content of which is hereby incorporated herein by reference.
As described in greater detail in that application, rather than utilizing a learned network view at each node by using the Spanning Tree Protocol (STP) algorithm combined with transparent bridging, in a link state protocol controlled Ethernet network the bridges forming the mesh network exchange link state advertisements to enable each node to have a synchronized view of the network topology. This is achieved via the well-understood mechanism of a link state routing system. The bridges in the network have a synchronized view of the network topology, have knowledge of the requisite unicast and multicast connectivity, can compute a shortest path connectivity between any pair of bridges in the network, and individually can populate their Forwarding Information Bases (FIBs) according to the computed view of the network. Two examples of link state routing protocols include Open Shortest Path First (OSPF) and Intermediate System to Intermediate System (IS-IS), although other link state routing protocols may be used as well. IS-IS is described, for example, in ISO 10589, and IETF RFC 1195, the content of each of which is hereby incorporated herein by reference. To prevent forwarding loops, a reverse path forwarding check is performed to determine whether a frame has been received on an expected port. If not, the frame is considered to be likely to have arrived as a result of unsynchronized/unconverged multicast forwarding and is dropped.
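The reverse path forwarding check can be sketched as follows. This is a simplified, hypothetical FIB layout keyed on source address alone; a real bridge FIB would be keyed on VLAN/MAC tuples and populated from the computed shortest path topology.

```python
def rpf_accept(fib, source, arrival_port):
    """Reverse path forwarding check: accept a frame only if it arrived
    on the port this node would itself use to forward toward the frame's
    source, according to its computed view of the topology."""
    expected_port = fib.get(source)
    if expected_port is None or expected_port != arrival_port:
        # Unexpected ingress: likely the result of unsynchronized or
        # unconverged multicast forwarding, so the frame is dropped.
        return False
    return True

# Hypothetical FIB entry: frames sourced from node E are expected on port 2.
fib = {"E": 2}
```

Because every bridge computes the same shortest paths from the same synchronized topology, a frame arriving on any port other than the expected one is evidence that some node's FIB has not yet converged, and discarding it prevents transient loops.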
Link state protocols utilize the control plane to perform fault propagation. This is achieved by the flooding of advertisements of changes to the network state. This is normally performed exclusively as a control plane function and is hop by hop. Each node receiving a previously unseen notification re-floods it on all other interfaces, but a node receiving a notification of which it has prior knowledge simply discards the information as redundant. This will result in reliable synchronization of the routing databases in all the nodes in the network, but the overall amount of time to synchronize the routing databases across the network can become significant in proportion to desired recovery times. This is particularly true for sparsely connected topologies where there are chains of “two-connected nodes” with multi-homed edges. Ring topologies are a specific and commonly employed example.
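The hop-by-hop flooding behavior can be modeled in a few lines. This is a minimal sketch assuming advertisements are identified by an origin and a sequence number, as is conventional in link state protocols; the class and ring layout are illustrative, not from the source.

```python
class FloodingNode:
    """Minimal model of hop-by-hop control plane flooding: a previously
    unseen advertisement is re-flooded on all other interfaces, while an
    already-seen advertisement is discarded as redundant."""

    def __init__(self, name):
        self.name = name
        self.seen = {}       # origin -> highest sequence number seen
        self.neighbors = []  # directly connected FloodingNode objects

    def receive(self, origin, seq, sender):
        if self.seen.get(origin, -1) >= seq:
            return           # duplicate or stale: discard
        self.seen[origin] = seq
        for neighbor in self.neighbors:
            if neighbor is not sender:
                neighbor.receive(origin, seq, self)

# Five-node ring A-B-C-D-E-A; node A originates advertisement seq 1.
nodes = {name: FloodingNode(name) for name in "ABCDE"}
ring = ["A", "B", "C", "D", "E"]
for i, name in enumerate(ring):
    nodes[name].neighbors = [nodes[ring[(i - 1) % 5]],
                             nodes[ring[(i + 1) % 5]]]
nodes["A"].seen["A"] = 1
for neighbor in nodes["A"].neighbors:
    neighbor.receive("A", 1, nodes["A"])
```

Every node ends up with a synchronized copy of the advertisement, but note that in the ring the advertisement must traverse a chain of nodes sequentially, with a processing step at each, which is exactly why two-connected topologies converge slowly.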
An example ring topology is shown in FIG. 1. In FIG. 1, the ring 10 includes nodes A-E 12, which are interconnected by links 14. In the example shown in FIG. 1, each node has a data plane to handle transmission of data on the network (represented by the square block) and a control plane 12′ (represented by the triangle block). The control plane is used to allow the network elements to exchange routing information and other control information, and is used by the network element to control how the data plane handles the data on the network.
When a failure occurs on the ring (indicated by the X in FIG. 1), the failure will be detected by the nodes adjacent the failure. The nodes adjacent the failure (nodes A and E in FIG. 1) will each generate a notification which will propagate in both directions around the ring. After the failure notification has propagated around the ring, the nodes will go through a hold-off period, and then begin calculating new paths through the network based on the new network topology. Once this has occurred the network will converge based on the new network topology and traffic will then start to follow the new paths through the network.
Route advertisements such as failure notifications are processed by the control plane 12′ at each hop around the ring before being forwarded to other nodes in the network, which slows down propagation of the failure notification and impacts the overall network convergence time. Specifically, since each node is required to process the failure notification at the control plane before forwarding it to the next node, in order to determine whether the notification is new or a duplicate to be discarded, the rate of propagation of the failure notification depends on the speed with which the nodes are able to process the failure notification in the control plane. For example, as shown in FIG. 1, when a link fails, the adjacent nodes (nodes A and E in FIG. 1) will detect the failure. Node A will transmit failure notification 1 to node B, which will pass the failure notification to its control plane 12′. After processing the failure notification, node B will forward it to node C, which will process the failure notification at its control plane and then forward it to node D. This process repeats at each node on the ring until the failure notification reaches node E. In the opposite direction, node E will generate failure notification 2 and transmit it to node D. Node D will process the failure notification at its control plane and forward it to node C. This process repeats as notification 2 works its way around the ring. The two failure notifications 1, 2 thus counter-propagate around the ring, allowing all nodes on the ring to be notified of the failure and to correctly scope the failure as a failure of the link rather than of a node.
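The dependence of notification delay on per-hop control plane processing can be made concrete with a simple model. The delay figures below are illustrative parameters, not measured values, and the model assumes only that each node must finish control plane processing before forwarding.

```python
def worst_case_notification_ms(ring_size, control_delay_ms, link_delay_ms=0.1):
    """Worst-case time for a link failure notification to reach every node
    on a ring when each hop requires control plane processing before the
    notification is forwarded onward.

    The two nodes adjacent to the failed link each notify in one direction,
    so the last node to learn of the failure is roughly half way around the
    ring: the delay grows linearly with ring size."""
    # Nodes to be notified: ring_size - 2; counter-propagation halves the
    # distance, so the farthest node is ceil((ring_size - 2) / 2) hops away.
    farthest_hops = (ring_size - 1) // 2
    return farthest_hops * (control_delay_ms + link_delay_ms)
```

For the five-node ring of FIG. 1 the farthest node (node C) is two control plane hops from either failure-adjacent node, but on a large ring the per-hop processing cost dominates recovery time, which motivates the faster exchange mechanism sought below.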
At each hop, the network element will process the message in its control plane before forwarding the failure notification along the ring. Since the network cannot converge until all of the nodes have received the notification, the amount of time it takes to propagate fault notification messages may be a significant contributor to the overall recovery time of the network. Thus, it would be advantageous to provide a method and apparatus for enabling the rapid exchange of control information in a link state protocol controlled network.