Time and/or frequency distribution is a fundamental requirement for packet networks. One of the biggest hurdles packet networking technologies face in replacing traditional Time Division Multiplexing (TDM) systems in both core and access networks is the transmission of accurate timing information (time and/or frequency). Legacy TDM networks were designed to carry precise frequency synchronization throughout their respective networks. But increasingly, access systems such as wireless base stations and multi-service access nodes (MSANs) require synchronization delivered over a network backhaul connection for basic connectivity and assurance of high quality of service to end user applications. A key dependency in the evolution to Ethernet backhaul in telecommunication networks is an ability to deliver carrier-grade (time and/or frequency) synchronization over Ethernet to remote wireless base stations and access platforms.
In telecommunication networks, remote and access TDM network elements with their embedded reference oscillators have traditionally recovered synchronization from TDM backhaul connections. As long as the TDM transmission network was traceable to a Primary Reference Clock (PRC), the remote and access elements could employ relatively simple Phase-Locked Loops (PLLs) to lock their oscillators to a PRC traceable backhaul feed. However, a problem occurs when a backhaul connection transitions to Ethernet, thus isolating the remote and access elements from their source of synchronization. While Ethernet has proven to be a useful, inexpensive, and ubiquitous technology for connectivity, it has not been well suited for applications requiring precise synchronization. By nature, it is asynchronous, which creates difficulty for real-time or timing sensitive applications that require synchronization.
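The phase-locked loop described above can be illustrated with a minimal sketch. This is not taken from the source: the class name, the proportional-integral (PI) loop-filter structure, and the gain values are all assumptions chosen for illustration, not a specification of any particular remote element's clock recovery design.

```python
class SoftwarePll:
    """Sketch of a PI-controlled software PLL that disciplines a local
    oscillator toward a reference, as a remote TDM element might lock
    to a PRC-traceable backhaul feed. Gains are illustrative only."""

    def __init__(self, kp=0.7, ki=0.3, nominal_hz=0.0):
        self.kp = kp          # proportional gain: tracks fast phase error
        self.ki = ki          # integral gain: learns constant frequency offset
        self.integ = 0.0      # accumulated (integral) frequency correction
        self.nominal = nominal_hz

    def update(self, phase_error):
        """Feed one phase-error sample (reference phase minus local phase);
        return the disciplined oscillator frequency for the next interval."""
        self.integ += self.ki * phase_error
        correction = self.kp * phase_error + self.integ
        return self.nominal + correction
```

With a PRC-traceable feed, the integral term converges to the local oscillator's constant frequency offset while the proportional term damps residual phase error, so the loop needs only a modest-stability oscillator to hold lock.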
Two principal sources of timing error must be eliminated to provide high quality (sub-microsecond level) synchronization of clocks. The first is timing error introduced by instabilities and drift of local oscillators, and the second is fluctuation in path delay (commonly known as delay variation) between transmitter and receiver clocks. Oscillator stability is primarily a component selection issue for a system designer. Employing a high-stability oscillator reduces measurement noise and improves the ability of a receiver clock synchronization mechanism to filter out transmission wander and jitter caused by network impairments. The primary sources of delay variation are Layer 2 and higher impairments such as queuing delays in network devices, media contention delays, software protocol stack processing delays, operating system and other software task delays, etc. Delay variation significantly degrades clock synchronization because it introduces variability into the travel time of timing protocol messages. At Layer 2 and higher, regardless of whether a network is lightly or heavily loaded, whether messages are short or long, or whether network equipment uses priority queuing, the potential for protocol messages to experience delay variation still exists. Timestamp filtering and minimum delay screening and selection of messages at end nodes, in addition to the use of robust clock synchronization algorithms, can help mitigate this problem to some extent, but their effectiveness depends on the traffic load level along a message communication path.
The rationale for minimum delay screening and selection of messages at end nodes is that delay variation on a communication path at Layer 2 and higher (Layer 2+) will have a probability distribution function with a “floor” or intrinsic minimum. The floor is the minimum delay that a packet (or a timing protocol message) can experience on a given network path. This floor may be viewed as a condition where all queues along the network path between a transmitter and a receiver are near their minimum when the particular packet is transmitted. Under normal non-congested loading conditions on the network path, a fraction of the total number of packets will traverse the network at or near this floor, even though some may experience significantly longer delays. Under these non-congested conditions, store-and-forward operations in high-speed devices effectively pass packets through with minimum delay. In addition, the delay variation distribution becomes more concentrated near this floor, with a relatively large fraction of the total packets experiencing this “minimum” or “near minimum” delay. However, a major limitation of this approach is that, at higher loads, minimum delay screening and selection of messages at end nodes will simply produce poor clock quality, since only a very small fraction of timing messages will experience the minimum “intrinsic” propagation delay of the network path.
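The screening idea above can be sketched in a few lines. This is an illustrative example, not the source's implementation: the function name, the window of delay samples, and the acceptance-band parameter are assumptions.

```python
def screen_minimum_delay(delays, band=0.05):
    """Keep only delay samples at or near the observed 'floor'.

    delays: delay measurements (e.g., seconds) for timing messages
            collected over some observation window.
    band:   fraction of the floor accepted as margin above it
            (an assumed, tunable parameter).
    Returns the indices of samples deemed to have crossed the path
    at or near its intrinsic minimum delay.
    """
    floor = min(delays)            # estimate of the intrinsic minimum
    limit = floor * (1.0 + band)   # acceptance threshold above the floor
    return [i for i, d in enumerate(delays) if d <= limit]
```

In practice, only the timestamps belonging to the screened messages would be fed to the clock synchronization algorithm. The limitation noted above shows up directly here: under heavy load the returned index list becomes very short, leaving too few samples to steer the clock well.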
In view of the foregoing, it may be understood that there may be significant problems and shortcomings associated with current clock synchronization technologies.