Fiber optic communication networks are experiencing rapidly increasing growth in capacity. This capacity growth is reflected in individual channel data rates, which have scaled from 10 Gbps (gigabits per second) to 40 Gbps, to the developing 100 Gbps, with future projections of 1000 Gbps channels and beyond. The capacity growth is also reflected in the increasing total channel count and/or optical spectrum carried within an optical fiber. In the past, optical channels were deployed with a fixed capacity in terms of bandwidth as well as a fixed amount of overhead for forward error correction (FEC). For example, in a conventional system deployment, channels are deployed at 10 Gbps or 40 Gbps (plus associated FEC overhead). These channels are designed to provide a fixed data throughput capacity of 10 Gbps or 40 Gbps. Moreover, the performance limits of these channels are established assuming that the system is operating at full capacity, with all the optical channels present. The first channels installed, however, operate under much more benign conditions and have significant extra margin available. This margin is not utilized until much later in the life cycle of the system. For example, a single wavelength deployed on a new optical line system could have more than 10 dB of excess margin that is not currently utilized (without adding new hardware). This unused margin can be considered wasted, and it forces the system to operate in a non-cost-effective way. If this extra margin could be utilized, even temporarily, to enhance the data throughput of the modem, for example, the economics of the system would be significantly improved.
Of note, next generation optical modems are equipped with the capability to support variable data throughput applications. Moreover, this capability will be provisionable. Therefore, depending on the opportunity, it would be advantageous to provision a modem at a higher data throughput when extra margin is available, such as on new and low-channel-count deployments. These next generation modems will make it possible to mine this excess margin and otherwise wasted capacity without requiring additional hardware. However, the excess margin will disappear as the channel count approaches full fill. In view of the above, it would be advantageous to have systems and methods for managing excess optical capacity and margin in optical networks.
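To illustrate why excess margin is worth mining, the sketch below applies the ideal Shannon capacity relation, C = B·log2(1 + SNR), to a hypothetical channel. All numbers here (35 GHz of usable bandwidth, a 12 dB baseline SNR, 10 dB of excess margin) are illustrative assumptions, not figures from the text, and the Shannon limit is an upper bound rather than what a real modem achieves.

```python
import math

def shannon_capacity_gbps(bandwidth_ghz, snr_db):
    """Ideal Shannon capacity C = B * log2(1 + SNR), per polarization."""
    snr_linear = 10 ** (snr_db / 10.0)
    return bandwidth_ghz * math.log2(1 + snr_linear)

# Hypothetical channel: 35 GHz of usable bandwidth, 12 dB received SNR.
baseline = shannon_capacity_gbps(35.0, 12.0)

# Same channel if 10 dB of unused margin could be converted into
# effective SNR (22 dB total) by provisioning a denser modulation.
with_margin = shannon_capacity_gbps(35.0, 22.0)
```

Under these assumptions the achievable rate rises from roughly 143 Gbps to roughly 256 Gbps, which is the kind of throughput gain that motivates provisioning a modem above its nominal rate while the margin exists.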
Fiber optic communication networks today are pushing up against the Shannon limit within the non-linear tolerance of the transponder technology currently in use. There is great interest in providing the best spectral efficiency possible, which is leading to the development of adaptive modulation techniques applied to fiber optic transmission. In wireless and Digital Subscriber Loop (DSL) technology, it is quite common to use adaptive modulation schemes which adapt to link conditions, e.g. High-Speed Downlink Packet Access (HSDPA) and Asymmetric Digital Subscriber Line 2+ (ADSL2+). In optical networks, some of the latest generation transponders on the market are capable of changing modulation scheme, e.g. Ciena's WaveLogic3 family. Transponders in the future will be able to change modulation scheme more quickly, and may be optimized to do so. However, today's systems cannot take advantage of these capabilities in a hitless manner.
Although wireless and DSL technologies can react to channel conditions using adaptive modulation, the system constraints in fiber optic communication networks would not allow similar techniques to work. In particular, two assumptions are built into the algorithms used in wireless and DSL systems. First, the data transported in both cases is bursty in nature, and the actual user data throughput versus the actual bit rate can be controlled. In wireless communication, the application layer is visible to the controller; in other words, the systems are designed to allow for periodic optimization, where the resulting changes can include the modulation scheme. Optical transport networks, by contrast, form the core of the data network and as such see a rather continuous flow of traffic due to multiple levels of multiplexing and grooming. There are many sources of this data and the volume is very high, so hold-offs on data transmission become too complex and/or expensive to implement.
Second, another simplifying condition is that the propagation time (distance) from modulator to demodulator is small relative to the symbol period of the transmission. In practical terms, for example, for an HSDPA network with a 5 km cell and a symbol rate of 5 Mbaud, there are 8.3 symbols in flight at any time. In an optical network carrying 100 Gbps over 2000 km, there are approximately 440 million symbols in flight. This difference of nearly 8 orders of magnitude represents a key difference in how such changes must be performed. Although there are some transponders on the market today which can change their operational state to accommodate a different spectral efficiency allowed by the link conditions, these changes cause some length of outage which can only be managed out of service or treated as a failure in the system; both options have negative effects on the system and drive operational complexity for the end user. Treating the change as an outage may cause higher level protocols to attempt to recover from the failure, leaving the system vulnerable to further failures. Treating it as a failure may also cause re-transmission of data, etc. With the vast amount of data involved, this is simply unacceptable.
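The symbols-in-flight comparison reduces to propagation delay multiplied by symbol rate. The sketch below reproduces the optical figure from the text under two stated assumptions (a fiber group velocity of roughly 2×10^8 m/s and a symbol rate of about 44 Gbaud for the 100 Gbps channel); this is one combination consistent with the 440 million figure, and actual symbol rates depend on modulation format and overhead.

```python
def symbols_in_flight(distance_m, velocity_m_per_s, symbol_rate_baud):
    """Symbols simultaneously propagating between modulator and demodulator:
    one-way propagation delay times the symbol rate."""
    return (distance_m / velocity_m_per_s) * symbol_rate_baud

# Optical case from the text: 100 Gbps over 2000 km of fiber.
# Assumed (not stated in the text): ~2e8 m/s group velocity in fiber,
# ~44 Gbaud symbol rate -- values that reproduce ~440 million.
optical = symbols_in_flight(2000e3, 2e8, 44e9)
```

The 2000 km span alone contributes a 10 ms one-way delay, so any modulation change propagates through hundreds of millions of in-flight symbols before the receiver sees it, which is why an uncoordinated switch manifests as an outage.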
For example, using conventional optical modems such as the WaveLogic3, testing was performed to switch in-service between Quadrature Phase Shift Keying (QPSK) and 16-Quadrature Amplitude Modulation (16-QAM). The switch requires several seconds because operating conditions on the line, including non-linear impairments, have to be calculated. Again, it is expected that the switch time can be optimized, but likely not down to the order of several milliseconds.
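The payoff of such a switch follows from standard constellation arithmetic, not from anything specific to the WaveLogic3: an M-ary constellation carries log2(M) bits per symbol, so at a fixed symbol rate moving from QPSK to 16-QAM doubles the data rate.

```python
import math

def bits_per_symbol(constellation_points):
    """Bits carried per symbol by an M-ary constellation (per polarization)."""
    return math.log2(constellation_points)

qpsk = bits_per_symbol(4)    # QPSK: 4 points -> 2 bits/symbol
qam16 = bits_per_symbol(16)  # 16-QAM: 16 points -> 4 bits/symbol

# At a fixed symbol rate, the throughput scales with bits per symbol,
# so the QPSK -> 16-QAM switch doubles the channel data rate.
ratio = qam16 / qpsk
```

This doubling is the reward that makes a hitless (or at least sub-second) modulation switch so valuable when excess margin is available.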