The Global Positioning System (“GPS”) is a U.S.-owned utility that provides users with positioning, navigation, and timing services. This system consists of three segments: the space segment, the control segment, and the user segment. The U.S. Air Force develops, maintains, and operates the space and control segments.
As with the Internet, GPS has emerged as an essential element of the global information infrastructure. Thousands of applications affecting every aspect of modern life utilize GPS technology, from cell phones and PDAs to bulldozers, shipping containers, and ATMs. In particular, hand-held PDA-based tools have emerged as economical location and navigation tools available to everyday consumers, making accurate geo-positioning commonplace. GPS also remains critical to U.S. national security, with GPS devices integrated into virtually every facet of U.S. military operations; nearly all new military assets, from vehicles to munitions, come equipped with GPS capability.
The user segment is essentially the users of the GPS system. GPS satellites broadcast signals from space, and each GPS receiver (e.g., a cell phone or PDA) uses these signals to calculate its three-dimensional location (latitude, longitude, and altitude) and the current time. The space segment is composed of 24 to 32 satellites in medium Earth orbit and also includes the payload adapters to the boosters required to launch them into orbit. The control segment includes a master control station, an alternate master control station, and a host of dedicated and shared ground antennas and monitor stations.
The user segment is composed of hundreds of thousands of U.S. and allied military users of the secure GPS Precise Positioning Service (“PPS”) and tens of millions of civil, commercial, and scientific users of the Standard Positioning Service. The Standard Positioning Service is less precise than the military PPS, but the PPS has limited accessibility.
A critical hardware component in utilizing GPS is the GPS receiver. The receiver calculates its position on the Earth by precisely timing signals sent by GPS satellites in orbit above the Earth. Each satellite continually transmits messages that include (1) the time the message was transmitted and (2) the satellite's position at the time of transmission. The receiver uses the messages it receives to determine the transit time of each message and computes the distance to each satellite by multiplying transit time by the speed of light. Each of these distances, together with the corresponding satellite location, defines a sphere; when the distances and satellite locations are correct, the receiver lies on the surface of each of these spheres. These distances and satellite locations are used to compute the location of the receiver using well-known navigation equations. This location information is then provided to a location-services application, which in turn provides it to any other applications running on the device (typically physically combined with the receiver) that need the location information. For example, the location information might be used by a simple display application to show the location in latitude and longitude coordinates on the device's display, by an application that displays a location icon on a moving map, or by an application that provides elevation information to the user of the device. Many GPS devices, such as cell phones and PDAs, use the location information to calculate and show more refined information, such as the device's compass heading and speed, which can be derived from position changes over time. Banking ATMs also rely on GPS information to provide accurate timestamps for financial transactions, such as the dispensing of cash.
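As a sketch of how speed and heading can be derived from successive position fixes, the following hypothetical example uses the standard haversine and initial-bearing formulas; the function names are illustrative and not taken from any particular device:

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius, meters

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two fixes (haversine formula)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def heading_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from fix 1 to fix 2, degrees clockwise from north."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    x = math.sin(dl) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.degrees(math.atan2(x, y)) % 360.0

def speed_mps(lat1, lon1, lat2, lon2, dt_seconds):
    """Average speed between two timestamped fixes."""
    return distance_m(lat1, lon1, lat2, lon2) / dt_seconds
```

For two fixes one second apart, separated by 0.001° of latitude due north (about 111 m), these formulas give a heading of 0° and a speed of roughly 111 m/s.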
So, as can be understood, a myriad of applications running on a multitude of devices may be requesting and obtaining location information from the receiver at any moment during the operation of the device.
GPS receivers operate on a “line-of-sight” methodology, and at least four satellites must be visible to the device to obtain accurate location results. Four sphere surfaces typically do not intersect at a single point, but they provide enough information to solve the navigation equations with a fairly high level of confidence for the position of the receiver and the current time. Most applications use only the location information and bypass the time information.
For a complete understanding of the hereinafter-described invention, some additional knowledge of how GPS works is necessary. The navigational signals transmitted by GPS satellites are transmitted on two separate carrier frequencies that are common to all satellites in the network. Two different encodings are used: (1) a public encoding that enables lower resolution navigation, and (2) an encrypted encoding used by the U.S. military (i.e., the PPS mentioned above). GPS satellites generate “messages” having a number of “subframes” containing the following information: (1) a clock timestamp, (2) an “ephemeris” (the precise satellite orbit of the transmitting satellite), and (3) an “almanac” (a satellite network synopsis and error correction information). Each GPS satellite continuously broadcasts a navigation message on channels referred to as L1 C/A and L2 P/Y, each transmitted at a rate of 50 bits per second. Each message frame is 1500 bits long and therefore takes 30 seconds to transmit; the complete navigation message spans 25 frames. Each message is tied to the specific timing of the satellite clock and also carries the exact time as part of the message. In order to obtain an accurate satellite location from the transmitted message, the receiver must demodulate the message for at least 18 to 30 seconds. In order to collect all of the transmitted almanacs, the receiver must demodulate the message for 732 to 750 seconds, or about 12.5 minutes.
All satellites broadcast at the same frequencies and are encoded using code division multiple access (“CDMA”), which allows messages from individual satellites to be distinguished from one another based on unique encodings for each satellite. Five frequencies are used in GPS, but for most consumers of GPS only the first, L1, is used. The frequency of L1 is 1575.42 MHz, and L2 is 1227.60 MHz. Another signal, L3, is broadcast at 1381.05 MHz and is used for nuclear detonation detection; a further signal, L4, is broadcast at 1379.913 MHz and is used to assist with ionospheric correction; and the last, L5, broadcasts at 1176.45 MHz and is used as a civilian safety-of-life signal. The satellite network uses a CDMA spread-spectrum technique in which the low-bitrate message data is encoded with a high-rate pseudo-random noise (“PRN”) sequence that is different for each satellite. Every GPS receiver is built with knowledge of the PRN codes for each satellite in order to complete its location calculations.
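The per-satellite PRN encoding can be illustrated with the standard C/A (coarse/acquisition) Gold-code generator: two 10-bit linear feedback shift registers, G1 and G2, whose outputs are combined using a satellite-specific pair of G2 taps. The following is a minimal sketch of that published algorithm, shown here only for the PRN 1 tap pair:

```python
def ca_code(prn_taps, length=1023):
    """Generate one period of a GPS C/A Gold code.

    prn_taps: the satellite-specific pair of G2 output taps (1-indexed),
              e.g. (2, 6) for PRN 1 per the GPS interface specification.
    """
    g1 = [1] * 10  # both registers are initialized to all ones
    g2 = [1] * 10
    chips = []
    for _ in range(length):
        # Output chip: G1 output XORed with the two selected G2 taps.
        chip = g1[9] ^ g2[prn_taps[0] - 1] ^ g2[prn_taps[1] - 1]
        chips.append(chip)
        # G1 feedback polynomial: taps 3 and 10.
        fb1 = g1[2] ^ g1[9]
        # G2 feedback polynomial: taps 2, 3, 6, 8, 9, 10.
        fb2 = g2[1] ^ g2[2] ^ g2[5] ^ g2[7] ^ g2[8] ^ g2[9]
        g1 = [fb1] + g1[:9]
        g2 = [fb2] + g2[:9]
    return chips
```

`ca_code((2, 6))[:10]` yields `[1, 1, 0, 0, 1, 0, 0, 0, 0, 0]`, the documented first ten chips (octal 1440) of the PRN 1 code.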
As mentioned above, a GPS receiver uses messages received from satellites to determine the satellite positions and the time each message was sent. The x, y, and z components of satellite position and the time sent are designated [x_i, y_i, z_i, t_i], where the subscript i denotes the satellite and takes the values 1, 2, . . . , n, with n ≥ 4. When the time of message reception indicated by the on-board clock is t_r, the true reception time is t_r + b, where b is the receiver's clock bias (i.e., clock delay). The message's transit time is therefore t_r + b − t_i. Since the message travels at the speed of light, c, the distance traveled is (t_r + b − t_i)c. Knowing the distance from receiver to satellite and the satellite's position implies that the receiver is on the surface of a sphere centered at the satellite's position. Thus the receiver is at or near the intersection of the surfaces of the spheres; in the ideal case of no errors, it is exactly at the intersection. The clock bias, b, is the amount by which the receiver's clock is off. The receiver therefore has four unknowns: the three components of GPS receiver position and the clock bias, [x, y, z, b]. The equations of the sphere surfaces are given by:

(x − x_i)² + (y − y_i)² + (z − z_i)² = ([t_r + b − t_i]c)²,  i = 1, 2, . . . , n
or, in terms of the “pseudoranges” p_i = (t_r − t_i)c, as:

p_i = √((x − x_i)² + (y − y_i)² + (z − z_i)²) − bc,  i = 1, 2, . . . , n.

These equations can be solved by algebraic or numerical methods, such as Bancroft's method, trilateration, or multidimensional Newton-Raphson iteration.
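As an illustration only, and not any particular receiver's implementation, the pseudorange equations above can be solved for [x, y, z, b] by multidimensional Newton-Raphson iteration. The satellite positions and pseudoranges in this sketch are synthetic:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def solve_4x4(A, v):
    """Solve a 4x4 linear system by Gaussian elimination with partial pivoting."""
    n = 4
    M = [row[:] + [v[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def solve_position(sats, pseudoranges, iters=15):
    """Newton-Raphson on f_i = sqrt((x-x_i)^2 + (y-y_i)^2 + (z-z_i)^2) - b*c - p_i = 0,
    starting from the Earth's center with zero clock bias."""
    x = y = z = b = 0.0
    for _ in range(iters):
        J, f = [], []
        for (xi, yi, zi), pi in zip(sats, pseudoranges):
            r = math.sqrt((x - xi) ** 2 + (y - yi) ** 2 + (z - zi) ** 2)
            # Row of the Jacobian: partial derivatives w.r.t. x, y, z, b.
            J.append([(x - xi) / r, (y - yi) / r, (z - zi) / r, -C])
            f.append(r - b * C - pi)
        dx = solve_4x4(J, [-v for v in f])
        x, y, z, b = x + dx[0], y + dx[1], z + dx[2], b + dx[3]
    return x, y, z, b
```

With four synthetic satellites at medium-Earth-orbit radius and pseudoranges built from a known receiver position and clock bias, the iteration recovers that position and bias.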
UTC, or Coordinated Universal Time, is the accepted standard by which most of the world regulates timestamps, clocks, and network computing. For example, Network Time Protocol (“NTP”) was designed to synchronize the clocks of computers over the Internet and encodes times using the UTC system. The UTC standard was officially adopted in 1961 by the International Radio Consultative Committee with the efforts of several national time laboratories. UTC is based on International Atomic Time (TAI), a time standard calculated using a weighted average of signals from atomic clocks located in national laboratories around the world. UTC differs from TAI only in that leap seconds are added to match the Earth's rotation. Almost all time zones around the world are expressed as positive or negative offsets from UTC, and the GPS satellites' precise internal clocks are derived from UTC.
The National Institute of Standards and Technology (“NIST”) develops technologies, measurement methods, and standards that help U.S. companies compete in the global marketplace. Congress created NIST in 1901 to provide the measurements and standards needed to resolve and prevent disputes over trade and to encourage standardization. NIST also maintains the official U.S. time as UTC(NIST), the coordinated universal time scale maintained at NIST. The UTC(NIST) time scale comprises an ensemble of cesium beam and hydrogen maser atomic clocks, which are regularly calibrated against the NIST primary frequency standard. The number of clocks in the time scale varies but is typically around ten. The outputs of the clocks are combined into a single signal using a weighted average, with the most stable clocks assigned the most weight. The clocks in the UTC(NIST) time scale also contribute to TAI and UTC as part of the world average upon which GPS satellites rely. UTC(NIST) serves as a national standard for frequency, time interval, and time-of-day. It is distributed through the NIST time and frequency services and is continuously compared with the time and frequency standards located around the world.
While most clocks are synchronized directly to UTC, the atomic clocks on the satellites are set to GPS time. The difference is that GPS time is not corrected to match the rotation of the Earth, so it does not contain leap seconds or other corrections that are periodically added to UTC. GPS time was set to match UTC in 1980 but has since diverged. The lack of corrections means that GPS time remains at a constant offset from TAI (TAI − GPS = 19 seconds), and the GPS navigation message includes the difference between GPS time and UTC.
Periodic corrections are performed on the satellite on-board clocks to keep them synchronized with ground clocks. As of July 2012, GPS time is 16 seconds ahead of UTC because of the leap second added to UTC on Jun. 30, 2012. GPS receivers subtract this offset from GPS time to calculate UTC and specific time zone values. The GPS-UTC offset field can accommodate 255 leap seconds (eight bits), which, given the current period of the Earth's rotation (with one leap second introduced approximately every 18 months), should be sufficient to last until approximately the year 2300.
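The receiver's offset handling amounts to a simple subtraction, sketched below; the epoch date and the 16-second offset come from the text, while the function name is illustrative:

```python
from datetime import datetime, timedelta

GPS_EPOCH = datetime(1980, 1, 6)  # GPS time began in step with UTC on this date

def gps_to_utc(gps_seconds, leap_offset):
    """Convert seconds of GPS time since the GPS epoch to UTC.

    leap_offset is the broadcast GPS-UTC offset (e.g. 16 seconds as of
    July 2012); GPS time carries no leap seconds, so the receiver simply
    subtracts the offset.
    """
    return GPS_EPOCH + timedelta(seconds=gps_seconds - leap_offset)
```

For example, `gps_to_utc(0, 0)` returns the epoch itself, and with a 16-second offset a GPS timestamp maps 16 seconds earlier on the UTC scale.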
Since the advent of GPS, relatively precise timestamps have been available from commercial GPS receivers. Since a GPS receiver functions by precisely measuring the transit time of signals received from several satellites, precise timing is fundamental to an accurate GPS location. As indicated above, the time from an atomic clock on board each satellite is encoded into the radio signal, and the receiver determines the transit time for each received signal. To do this, the local clock in a GPS device is corrected to the GPS atomic clock time by solving for three dimensions and time based on four or more satellite signals and updating its own clock accordingly. Improvements in GPS processing algorithms have led many modern low-cost GPS receivers to achieve better than 10-meter accuracy, which implies a timing accuracy of about 30 ns, and GPS-based laboratory time references routinely achieve 10 ns precision. Hence, GPS-enabled devices have access to precise time and can, therefore, generate their own accurate timestamps for processing transactions.
As accurate as GPS can be, many factors can degrade the accuracy of GPS calculations. For example, atmospheric variances (temperature and pressure differences at differing altitudes and locations) can cause inaccuracies in the signal reception time, and objects may interrupt message transmission since GPS reception is line-of-sight dependent. GPS receivers therefore use a variety of techniques to improve the accuracy of their location calculations. One technique is receiver autonomous integrity monitoring, or “RAIM.” RAIM detects faults within GPS pseudorange measurements: when more satellites are available than are needed to produce a position fix, pseudoranges that differ significantly from the statistically expected value (“outliers”) are excluded from the position calculations in the receiver to improve location precision. In some instances these outliers are caused by satellite signal integrity problems, such as ionospheric dispersion, signal interference, or errors in orbital path expectations. Traditional RAIM uses fault detection to provide notice of a fault to the user, or to provide exclusion decisions to the receiver so that it can continue to operate in the presence of a GPS failure. The exclusion test is a statistical function of the pseudorange measurement residual (i.e., the difference between the expected measurement and the observed measurement). The statistic is compared with an error threshold value to determine whether an actual fault has occurred, and position calculations are then adjusted based upon certain time-limited exclusion rules. Hence, when RAIM is integrated into a GPS receiver, GPS satellite availability is based upon performance factors in calculating the position of the receiver.
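The exclusion step can be sketched as a simple residual-threshold test. This is a simplification of full RAIM statistics, and the threshold value and function name here are illustrative:

```python
def exclude_outliers(residuals, threshold_m):
    """Flag pseudorange measurements whose residual (expected minus observed,
    in meters) exceeds the error threshold; return indices to keep and to exclude.

    A real RAIM implementation would also re-test after exclusion and apply
    time-limited exclusion rules before readmitting a satellite.
    """
    keep, excluded = [], []
    for i, r in enumerate(residuals):
        (excluded if abs(r) > threshold_m else keep).append(i)
    return keep, excluded
```

With residuals of [1.2, -0.8, 55.0, 2.1, -1.5] meters and a 10-meter threshold, satellite index 2 is excluded, and the remaining four satellites still suffice for a position fix.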
Another technique is “differential correction.” Differential correction techniques are used to enhance the quality of location data gathered to determine the receiver's position. The differential correction can be applied in real time directly in the receiver's location processing or, after the fact, with post-processing in a laboratory or office environment. The underlying idea of differential GPS is that any two receivers that are relatively close together will experience similar atmospheric errors. If a second GPS receiver is set up on a precisely known location, that GPS receiver can act as a base or reference station for the first GPS receiver (the “roving” receiver). The base station receives the same GPS signals as the roving receiver, but instead of working like a normal GPS receiver, it uses its known position to work the timing equations backwards: it calculates what the travel times of the GPS signals should be and compares them with what they actually are. The difference is an “error correction” factor, and the base station transmits this error information to the roving receiver, which uses it to correct its own measurements and calculations in real time. By this process, virtually any GPS receiver with a known, precise location can be utilized as a base station, as long as a high-speed broadband Internet connection is available between the receivers so that the error correction information can be transmitted. For example, a cell tower, a Wi-Fi access point, or a radio beacon can all serve as a base station for a GPS-enabled device.
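The base-station computation can be sketched as follows, using synthetic numbers and assuming, as a simplification, that base and rover see an identical atmospheric error on each satellite:

```python
import math

def geometric_range(rx, sat):
    """True distance from a precisely known receiver position to a satellite."""
    return math.sqrt(sum((r - s) ** 2 for r, s in zip(rx, sat)))

def correction(base_pos, sat_pos, measured_range):
    """Error correction factor: what the range should be minus what was measured."""
    return geometric_range(base_pos, sat_pos) - measured_range

def apply_correction(rover_measured_range, corr):
    """Rover applies the base station's per-satellite correction to its own measurement."""
    return rover_measured_range + corr
```

If the base station, at a surveyed position, measures a range 7.5 m too long due to atmospheric delay, the correction is −7.5 m; a nearby rover adds that correction to its own measurement of the same satellite, cancelling the shared error.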
Satellite orbital geometry can also affect the accuracy of GPS positioning and can magnify or lessen other GPS errors. This effect is called Geometric Dilution of Precision (GDOP). GDOP refers to where the satellites are in relation to one another and is a measure of the quality of the satellite geometry. The wider the angle between satellites, the better the measurement. Many GPS receivers have the ability to selectively utilize signals from the satellites that provide the best certainty of information based upon this idea.
GPS receivers usually report the quality of satellite geometry in terms of Position Dilution of Precision, or PDOP. PDOP combines horizontal (HDOP) and vertical (VDOP) measurements (latitude, longitude, and altitude). A low DOP indicates a higher probability of accuracy, and a high DOP indicates a lower probability of accuracy. A PDOP of 4 or less is excellent, a PDOP between 5 and 8 is acceptable, and a PDOP of 9 or greater is poor. Another related term is TDOP, or Time Dilution of Precision, which refers to satellite clock offset. On some GPS receivers, a parameter known as the PDOP mask can be set. This causes the receiver to ignore satellite configurations that have a PDOP higher than the specified limit and, in theory, improves position accuracy.
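The PDOP quality bands and mask behavior above amount to simple threshold checks, sketched below with illustrative function names:

```python
def pdop_quality(pdop):
    """Classify a PDOP value per the rule of thumb above."""
    if pdop <= 4:
        return "excellent"
    if pdop <= 8:
        return "acceptable"
    return "poor"

def passes_pdop_mask(pdop, mask):
    """A receiver with a PDOP mask ignores satellite configurations above the limit."""
    return pdop <= mask
```

With a mask of 6, for example, a configuration with PDOP 5.1 is used while one with PDOP 8.2 is ignored.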
Another method used to improve location accuracy is carrier phase tracking. A GPS receiver determines the travel time of a signal from a satellite by comparing the pseudo-random code it generates with the identical code in the signal from the satellite. The receiver shifts its code pattern back in time until the pattern becomes synchronized with the transmitted satellite code. The amount of shift required to synchronize the two codes equals the signal's travel time; this is done continuously for each satellite. However, the chips (or cycles) of the pseudo-random code are relatively far apart, so even when the codes are synchronized several meters of inaccuracy can remain in the calculated position.
A solution to this inaccuracy is carrier phase tracking. The period of the GPS carrier frequency multiplied by the speed of light gives the wavelength, which is about 0.19 meters for the L1 carrier. Receivers with carrier phase tracking enhance the accuracy of time calculations between the receiver and the satellite by using pseudo-random code synchronization up to a point and then making further refined measurements based on phase variances of the carrier frequency for that code. The L1 carrier frequency is much higher than the pseudo-random code frequency, so its cycles are much closer together and therefore yield more accurate measurements. Accuracy to within 1% of the wavelength in detecting the leading edge of the carrier reduces the pseudorange error to as little as 2 millimeters.
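The numbers above follow directly from the relation λ = c/f, as this short check shows:

```python
C = 299_792_458.0   # speed of light, m/s
L1_HZ = 1575.42e6   # L1 carrier frequency, Hz

wavelength = C / L1_HZ              # ~0.19 m, as stated above
carrier_error = 0.01 * wavelength   # 1% of a wavelength -> roughly 2 mm
```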
However, even given all of the augmentation technology and sophistication of GPS, many conventional geo-positioning systems still suffer from inaccuracy because their onboard chipset clocks are not accurate enough to provide precise geo-positioning calculations. Even when these chipsets are corrected using calculations from signals received from the GPS satellites, their limited accuracy inhibits the level of precision required by many emerging software applications. Further, the same situation can occur with the base stations discussed above, limiting the value of differential error data. Moreover, without sufficient timestamp precision, variations in GPS satellite time data cannot be verified, and inaccurate satellite GPS data cannot be excluded per a RAIM technique.
Hence, what is needed is a process for updating a universal timestamp in a GPS-enabled device in real time and utilizing that timestamp to enhance the precision of GPS calculations in the GPS-enabled device, while utilizing commercially standard clock hardware.