Deploying stand-alone Global Positioning System (GPS) receivers at each base transceiver station (BTS) cell site is a technique that has long been used to provide precision time synchronization with an accuracy better than 1 microsecond (μs). For example, in Code Division Multiple Access (CDMA) networks, stand-alone GPS receivers have been deployed at each cell site for over 15 years.
The Global Positioning System is a space-based global navigation satellite system (GNSS) that allows a GPS receiver on the surface of the Earth to infer reliable location and time information in all weather conditions, provided there is an unobstructed line of sight from the GPS receiver to plural satellite vehicles (SVs). A GPS signal includes a number of parameters specific to the SV emitting the signal, which parameters can be used to compute the precise location of the GPS receiver and a precise offset of the GPS receiver's clock relative to a time reference traceable to a common timebase. Thus, the GPS receiver can synchronize its timing with a network of GPS receivers relative to a common timebase. A GPS receiver that is demodulating GPS signals received from plural SVs such that precise timing can be extracted is said to be locked.
In older systems, the antenna component of a GPS receiver had to have an unobstructed view of the SVs and therefore had to be located, for example, on top of tall buildings. Traditionally, there were two performance figures of merit regarding the link margin of a GPS receiver: the acquisition sensitivity and the tracking sensitivity. The acquisition sensitivity is the minimum signal quality, expressed as a carrier-to-noise ratio (C/No), required to demodulate and lock to a GPS signal from power-on; the tracking sensitivity is the minimum C/No required to maintain lock after GPS signal acquisition has been achieved. The tracking sensitivity benefits from the availability of the SV parameters in the GPS signal already demodulated by the GPS receiver. Another figure of merit associated with acquisition is the time required by a GPS receiver to achieve lock, often referred to as the Time-to-First-Fix (TTFF). An Aided-GPS (A-GPS) operation uses network resources to send SV parameters to the GPS receiver in order to improve the acquisition sensitivity and the TTFF. This is of benefit in poor signal conditions, for example in a city, where the GPS signals may suffer multipath propagation due to bouncing off buildings, or may be weakened by passing through various materials. Additionally, because of the extremely low power of the GPS signal, the C/No can easily be degraded by the presence of interfering jamming signals (intentional or not). The A-GPS technology has yielded a substantial improvement in acquisition sensitivity, allowing the use of GPS receivers in more convenient physical locations, e.g., inside buildings. In an A-GPS receiver located in a degraded signal environment, the improved acquisition link margin offsets penetration losses and other impairments of the GPS signals.
However, since penetration losses are difficult to predict exactly, there is increased uncertainty as to whether the GPS link margin will be (and will remain) adequate for a given deployment. Thus, an adequate margin for a particular deployment cannot be guaranteed. Additionally, the accuracy of the timing extracted in a degraded signal environment may itself be degraded.
To summarize, although the A-GPS technique improves the GPS signal acquisition link margin, making it possible to capitalize on the A-GPS performance and relax the GPS antenna deployment provisioning rules, the technique introduces an unacceptable uncertainty in the resultant link margin that limits its applicability in telecom products.
Packet-based synchronization methods such as the ones set forth in the IEEE-1588 standard have recently promised to substantially reduce the cost and improve the reliability of precision time synchronization. The predominant architecture associated with packet-based synchronization is to deploy a few timing servers (masters) within a network, the timing servers distributing timing to hundreds of clients (slaves). The timing servers are usually network devices distinct from the base stations (BTSs).
FIG. 1 illustrates the packet messages involved in the IEEE-1588 (the January 2011 version of which is incorporated herein by reference) method of transferring time synchronization between a master 10 and a slave 20, the sequence of operations being represented via downward time lines. The master 10 sends a SYNC message and embeds a master egress time (T1), according to the master clock, in the SYNC packet's payload. The slave 20 receives the SYNC packet and locally marks a slave ingress time (T2) according to the slave clock. The slave 20 then sends a DELAY_REQUEST message (marking a slave egress time T3 according to the slave clock). The master 10 marks a master ingress time (T4) of the DELAY_REQUEST message according to the master clock, and then sends a DELAY_RESPONSE message embedding T4 in the DELAY_RESPONSE packet's payload. Alternatively, the master egress time (i.e., the timestamp T1) may be conveyed in a separate message called a FOLLOW_UP, according to a method referred to as a two-step clock. The SYNC and DELAY_REQUEST messages are termed “Event” messages since their delivery is time-stamped at both egress and ingress, whereas the FOLLOW_UP and DELAY_RESPONSE messages are referred to as “General” messages. Messages may be transported over a variety of communication protocols, for example, as Ethernet packets. The interval (T4−T1)−(T3−T2) represents the round-trip propagation delay, which may be considered twice the one-way propagation delay (Tprop). Once the propagation delay Tprop is known by the slave 20, Tprop can be removed from T1 to synchronize the slave clock with the master clock. The key impairment to accurate synchronization over Ethernet networks is packet delay variation, which may occur when a packet carrying an Event message encounters queuing delay.
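The timestamp arithmetic described above can be sketched as follows. This is an illustrative sketch only, not an implementation from the IEEE-1588 standard or the present disclosure; the function name and the assumption of a perfectly symmetric path (equal master-to-slave and slave-to-master delays) are illustrative.

```python
def ptp_sync(t1, t2, t3, t4):
    """Return (propagation delay, slave clock offset) from the four
    IEEE-1588 timestamps: t1 and t4 are taken on the master clock,
    t2 and t3 on the slave clock. Assumes the one-way delays in the
    two directions are equal (symmetric path, no queuing delay)."""
    round_trip = (t4 - t1) - (t3 - t2)   # (T4-T1)-(T3-T2): round-trip delay
    t_prop = round_trip / 2.0            # one-way delay under symmetry
    offset = (t2 - t1) - t_prop          # slave clock minus master clock
    return t_prop, offset

# Example: SYNC leaves the master at T1=100.0 (master clock) and arrives
# at T2=100.8 (slave clock); DELAY_REQUEST leaves at T3=101.0 (slave
# clock) and arrives at T4=100.5 (master clock).
t_prop, offset = ptp_sync(100.0, 100.8, 101.0, 100.5)
# t_prop = 0.15, offset = 0.65: the slave clock runs 0.65 ahead and
# should be stepped back by the offset to align with the master.
```

If the actual path is asymmetric or an Event message encounters queuing delay, the symmetry assumption is violated and the computed offset is in error, which is precisely the packet delay variation impairment noted above.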
The transfer of synchronization packets may convey frequency information and timing information. For frequency information, only one-way communication is necessary, whereas for timing information, two-way communication is required. Thus, in order to convey frequency information, reception of SYNC packets alone would suffice.
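The reason one-way communication suffices for frequency can be seen from the following sketch (illustrative only; the function name is an assumption, not from the standard): comparing the interval between two SYNC egress timestamps with the interval between the corresponding ingress timestamps yields the slave's fractional frequency offset, and any constant propagation delay cancels in the subtraction.

```python
def frequency_offset(t1_a, t2_a, t1_b, t2_b):
    """Fractional frequency offset of the slave clock relative to the
    master, estimated from two SYNC messages: t1_a and t1_b are master
    egress times, t2_a and t2_b the corresponding slave ingress times.
    A constant propagation delay adds to both t2_a and t2_b and so
    cancels; only delay *variation* corrupts the estimate."""
    master_interval = t1_b - t1_a   # elapsed time per the master clock
    slave_interval = t2_b - t2_a    # same interval per the slave clock
    return (slave_interval - master_interval) / master_interval

# Example: the master spaces two SYNCs 1.0 s apart, but the slave
# measures 1.0001 s between their arrivals.
ppm = frequency_offset(0.0, 10.0, 1.0, 11.0001)
# ≈ 1.0e-4, i.e., the slave clock runs about 100 ppm fast.
```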
Since timing servers are expensive, they are typically deployed to serve a large number of clients. A fundamental problem with packet-based methods is that controlling the packet delay variation (PDV) over a large number of hops (a large number being inherent in this architectural model) is difficult without deploying specialized switching nodes that account for their internal packet delay. The PDV is a key metric for the delivery of adequate time synchronization accuracy.
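The growth of PDV with hop count can be illustrated with a toy model (an assumption for illustration, not a characterization of any real network): each hop contributes a fixed wire delay plus a random queuing delay, so the spread between the fastest and slowest SYNC packets widens as more switches sit between master and slave.

```python
import random

def simulate_sync_delays(n_hops, n_packets, base_delay=1e-6,
                         max_queue=50e-6, seed=0):
    """Illustrative toy model: each hop adds a fixed base delay plus a
    uniformly distributed queuing delay. Returns the one-way delays
    experienced by n_packets SYNC messages crossing n_hops switches."""
    rng = random.Random(seed)
    delays = []
    for _ in range(n_packets):
        d = 0.0
        for _ in range(n_hops):
            d += base_delay + rng.uniform(0.0, max_queue)
        delays.append(d)
    return delays

few_hops = simulate_sync_delays(n_hops=2, n_packets=1000)
many_hops = simulate_sync_delays(n_hops=10, n_packets=1000)

# PDV taken here as the peak-to-peak spread of the observed delays;
# it grows with hop count, directly degrading timing accuracy.
pdv_few = max(few_hops) - min(few_hops)
pdv_many = max(many_hops) - min(many_hops)
```

In this model the slave cannot distinguish queuing delay from propagation delay, which is why the architectural choice of a few servers feeding clients across many hops makes accurate synchronization hard without switch-level support.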
Accordingly, it would be desirable to provide devices, systems and methods that avoid the afore-described problems and drawbacks.