Optical network control planes provide automatic allocation of network resources in an end-to-end manner. Exemplary control planes may include Automatically Switched Optical Network (ASON) as defined in G.8080/Y.1304, Architecture for the automatically switched optical network (ASON), the contents of which are herein incorporated by reference; Generalized Multi-Protocol Label Switching (GMPLS) Architecture as defined in Request for Comments (RFC) 3945 and the like, the contents of which are herein incorporated by reference; Optical Signaling and Routing Protocol (OSRP) from Ciena Corporation, which is an optical signaling and routing protocol similar to PNNI (Private Network-to-Network Interface) and MPLS; or any other type of control plane for controlling network elements at one or more layers and establishing connections therebetween. As described herein, these are referred to as control planes as they deal with routing signals at Layers 0, 1, and 2, i.e., photonic signals; time division multiplexing (TDM) signals such as, for example, Synchronous Optical Network (SONET), Synchronous Digital Hierarchy (SDH), and Optical Transport Network (OTN); Ethernet; MPLS; and the like. Control planes are configured to establish end-to-end signaled connections such as sub-network connections (SNCs) in ASON or OSRP and label switched paths (LSPs) in GMPLS and MPLS. Control planes use the available paths to route the services and program the underlying hardware accordingly.
In operation, an optical network operating a control plane includes interconnected network elements that exchange information with one another via control plane signaling. As such, each network element has a routing database with up-to-date information about the network, e.g., topology, bandwidth utilization, etc., that enables path computation at a connection's source node (i.e., an originating node). Once a path is computed for the connection, the source node sends a setup message with validation criteria to each node in the computed path, i.e., any intermediate nodes along the path to a destination node (i.e., a terminating node). Note, the computed path is determined based on the information in the routing database of the source node, but the connection may have other criteria that need to be validated at each other node in the computed path, and this validation criteria information is not necessarily available to the source node. The source node defines which criteria the intermediate nodes need to validate; examples of validation criteria can include available bandwidth on aggregated links, available bandwidth to resize Optical channel Data Unit-flex (ODUflex) connections, route diversity, etc. That is, the validation criteria can be anything that the source node is not able to detect locally during the path computation based on information stored in the source node's routing database. For example, information which is prone to change frequently over time may not be flooded in the control plane.
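The hop-by-hop setup and per-node validation described above can be sketched as follows. This is a minimal illustrative model only; the node structure, the `min_bandwidth` criterion, and all function names are assumptions for illustration and do not correspond to an actual ASON/GMPLS signaling API.

```python
# Hypothetical sketch of control-plane setup validation; all names and
# structures are illustrative assumptions, not an actual ASON/GMPLS API.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    # Per-node state that is NOT flooded to the source node's routing
    # database, e.g., currently available bandwidth on aggregated links.
    available_bandwidth: int

    def validate(self, criteria: dict) -> bool:
        # Each node checks only the criteria the source asked it to validate.
        return self.available_bandwidth >= criteria.get("min_bandwidth", 0)

def send_setup(path: list, criteria: dict):
    """Model of a setup message traversing the computed path; each node
    validates criteria that only it can evaluate locally."""
    for node in path:
        if not node.validate(criteria):
            return ("release", node.name)  # crankback to the source node
    return ("connected", None)

path = [Node("A", 100), Node("B", 40), Node("C", 100)]
print(send_setup(path, {"min_bandwidth": 50}))  # -> ('release', 'B')
print(send_setup(path, {"min_bandwidth": 30}))  # -> ('connected', None)
```

The key point the sketch captures is that the source node computes the path from its own routing database, but the bandwidth value checked in `validate` lives only at each node, so it can only be tested during setup signaling.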
Conventionally, if there is more than one node in the path which does not meet the validation criteria of a connection, the first node at which the validation criteria are not met issues a release or crankback from the setup, and the source node gets feedback only from this first point of failure. Since there is at least one more node in the path that fails the validation criteria, there is an increased probability that the next setup or other action will fail at a second node which also does not satisfy the validation criteria. For example, after two consecutive failures, an SNC goes into back-off, which can cause a traffic hit of at least 1 sec. Avoiding the second release or crankback would be a significant advantage over the conventional approach.
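The conventional behavior above can be sketched to show why single-failure feedback forces repeated crankbacks. The retry loop, the exclusion set, and the two-attempt cutoff are assumed for illustration; they model, not reproduce, actual SNC back-off logic.

```python
# Illustrative sketch (assumed behavior, not vendor code) of the conventional
# crankback: the source node learns about only ONE failing node per attempt.
def conventional_setup(path, criteria, failing_nodes):
    """Return the first node failing validation, or None on success."""
    for node in path:
        if node in failing_nodes:
            return node  # first point of failure triggers release/crankback
    return None

path = ["A", "B", "C", "D"]
failing = {"B", "C"}          # two nodes cannot satisfy the criteria
attempts = 0
excluded = set()
while attempts < 2:
    attempts += 1
    candidate = [n for n in path if n not in excluded]
    failed = conventional_setup(candidate, {}, failing)
    if failed is None:
        break
    excluded.add(failed)      # source can route around only this one node
# Two consecutive failures: the SNC enters back-off (traffic hit >= 1 sec).
print(attempts, sorted(excluded))  # -> 2 ['B', 'C']
```

In the sketch, the first attempt cranks back at B, the retry cranks back at C, and only after two failed attempts does the source know about both failing nodes, by which point back-off has already been triggered.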