Synchronization among the individual logic chips within a computer is essential to error-free execution of software instructions. Such synchronization has traditionally been accomplished through the use of a clocking system, which works as follows: a master clock generates alternating high and low voltage signals (interpreted as logical 0's and 1's) at fixed intervals. The signals are buffered through some form of repower tree to boost their strength, and then distributed over wires to the individual logic chips. In effect, the rate at which data is processed by the entire computer logic system is set by the rate at which the clock signal pulses.
Theoretically, the only limit on the processing speed of a computer system is that electrical signals cannot travel faster than the speed of light (0.3×10⁹ m·s⁻¹). In reality, however, there are other limits. Impedance inherent in circuit board traces slows the signals down, and the amount of impedance (and therefore delay) can vary from trace to trace due to inevitable variations in length, materials, and manufacturing processes. To account for this, computer engineers are forced to choose trace segments of uniform length and to adjust the distance between logic chips to offset the variations in impedance.
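As a rough illustration of why trace length matters, the sketch below estimates propagation delay along a board trace from its length and the board material's relative permittivity. The numeric values (FR-4-like permittivity, centimetre-scale lengths) are illustrative assumptions, not figures from the text.

```python
# Sketch: estimating propagation delay along a circuit-board trace.
# All numeric values here are illustrative assumptions.

C = 3.0e8  # speed of light in vacuum, m/s


def trace_delay_ns(length_m: float, eps_r: float = 4.2) -> float:
    """Signal delay in nanoseconds for a trace of the given length.

    Propagation speed in a board dielectric is roughly c / sqrt(eps_r),
    so a higher-permittivity material (e.g. FR-4, eps_r ~ 4.2) slows
    the signal and increases the delay.
    """
    v = C / (eps_r ** 0.5)          # propagation speed in the material
    return length_m / v * 1e9       # seconds -> nanoseconds


# Two traces differing by a few centimetres already diverge by a
# fraction of a nanosecond, which is why uniform trace lengths matter:
skew_ns = trace_delay_ns(0.15) - trace_delay_ns(0.12)
```

Under these assumed values, a 15 cm trace carries roughly a nanosecond of delay, so even small length mismatches consume a meaningful slice of a fast clock period.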
The processing that occurs within each logic chip also creates a delay between receiving inputs and generating outputs. The amount of this delay varies from chip to chip based on each chip's particular characteristics: variations in resistance and capacitance loads, temperature variations, and material defects can all account for the discrepancies among logic chips. Even two chips manufactured from identical materials, using identical techniques, may differ in the amount of delay that signals experience when they reach the chips. Thus, even if two signals arrive at two chips at the same time, the resulting outputs may occur at slightly different times.
It is important that all of the logic chips experience a "high" or "low" condition (generated by the master clock) at the same or nearly the same instant. In other words, they must operate in synchronization, in the same phase of the clock cycle. Otherwise, chips will clock data to other chips faster or slower than the receiving chips are prepared to accept it. When chips receive timing signals out of synchronization with data signals, the resulting condition is known as clock skew.
Clock skew can occur for several reasons. Most often, it happens when the timing signal takes a longer path than the data signal. The clock signal can also become skewed if the data buffer chips have different delays than the clock buffer chips. If the chips fall far enough out of synchronization, data may be lost and critical instructions may go unprocessed, resulting in a system failure. This has become a pressing issue in high-performance systems, where timing parameters have become much tighter.
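The failure mode described above can be made concrete with a standard setup/hold timing check: data launched on one clock edge must settle before the receiver's next edge (setup) and must not change too soon after the receiver's current edge (hold). The sketch below is a simplified model with made-up timing numbers, not parameters of any real part.

```python
# Sketch: checking whether clock skew breaks a chip-to-chip transfer.
# Timing numbers are illustrative assumptions (all in nanoseconds).


def transfer_ok(clock_period, data_delay, clock_skew, setup, hold):
    """Return True if data launched at the sender is captured safely.

    Setup check: data launched at t=0 and arriving at `data_delay`
    must settle `setup` before the receiver's next clock edge, which
    occurs at `clock_period + clock_skew`.
    Hold check: the newly arriving data must not overwrite the old
    value sooner than `hold` after the receiver's current edge, which
    occurs at `clock_skew`.
    """
    setup_ok = data_delay + setup <= clock_period + clock_skew
    hold_ok = data_delay >= clock_skew + hold
    return setup_ok and hold_ok


# With no skew the transfer succeeds; a clock arriving 0.5 ns late at
# the receiver eats the hold margin and the same transfer fails:
print(transfer_ok(2.0, 0.6, 0.0, 0.3, 0.2))  # True
print(transfer_ok(2.0, 0.6, 0.5, 0.3, 0.2))  # False
```

The example shows why "far enough out of sync" is not vague in practice: the usable skew budget is bounded on both sides by the setup and hold windows.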
Determining the proper length for transmission lines, the proper placement of logic chips, and the maximum possible processing speed for a given computer architecture requires a large amount of simulation with Computer Aided Design (CAD) software. Since the actual delay time inherent in the transmission lines and logic chips is never known prior to manufacture, simulation of a computer architecture must account for a range of possible signal delay times. Worst- and best-case scenarios for every stage of processing in the architecture must be taken into account. The final result of the simulation is the worst and best possible processing speed for the entire system, which in turn defines the slowest and fastest permissible clock speeds.
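The worst/best-case bounding described above can be sketched in a few lines: each pipeline stage contributes a delay range, and summing the extremes yields the two path delays that bracket the permissible clock speed. The stage delay figures below are invented for illustration.

```python
# Sketch: bounding system clock speed from per-stage delay ranges,
# in the spirit of the worst/best-case CAD simulation described above.
# Stage delay ranges (ns) are made-up illustrative values.

stages = [
    (0.8, 1.2),   # (best-case, worst-case) delay of stage 1, ns
    (1.1, 1.6),   # stage 2
    (0.5, 0.9),   # stage 3
]

best_path = sum(lo for lo, hi in stages)    # fastest possible path, ns
worst_path = sum(hi for lo, hi in stages)   # slowest possible path, ns

# The clock must accommodate the slowest path, so the worst case sets
# the guaranteed-safe frequency; the best case is only reachable if a
# particular machine happens to be built better than worst case.
guaranteed_ghz = 1.0 / worst_path
potential_ghz = 1.0 / best_path
```

The gap between `guaranteed_ghz` and `potential_ghz` is exactly the performance left on the table by clocking every machine at the worst-case rate.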
There are three basic shortcomings in this approach to chip synchronization. The first is the amount of tuning required to make the system successful at higher clock speeds. In high-end systems, or supercomputers, this may necessitate a manual operation wherein technicians tune individual cables in order to line up all of the various sources of delay within the system; this is an expensive procedure to perform on a production basis. The second is that the actual distribution of the clock pulses may compromise the tuning already performed on the system. To minimize the effect of clock pulse distribution on a tuned system, it becomes necessary to minimize the number of connectors, cables, and other packaging elements; the result is that it is difficult to build a high-performance system in anything other than a planar (two-dimensional) format. The third shortcoming is that the manufacturer never really knows the maximum clock speed for any particular machine: only the general case is known. Thus, there is no way to speed up performance by taking advantage of better-than-worst-case construction.
To accomplish chip-to-chip synchronization, the chips ideally should be aligned both to the system clock and to each other, and should be realigned during operation to reflect changing loads, impedance, and temperature. This would also permit tighter tolerances on clocking speed, as the system would more accurately reflect the actual delay times. A dynamic method of on-chip timing adjustment could accomplish this task. Such a method would require circuits capable of detecting the amount of delay occurring in the system and creating internal delays to compensate. One way to sense the amount of clock signal delay being experienced is to compare the phase of the master clock signal to that of a feedback signal from a distant chip. If the distance between the two chips (i.e., the chip sending the clock signal and the distant chip) is known, then the circuit designer can determine how much difference, under ideal conditions, should exist between the phase of the reference signal and the phase of the feedback signal from the distant chip. For example, if the distance between the two chips is such that it takes a signal one clock cycle to travel from the source chip out to the distant chip and return as feedback, then under ideal conditions (with no delay), the master clock signal should be in the same phase as the feedback sense signal. That is, the master clock signal and the feedback signal should rise to a logical state of "1" and drop to a logical state of "0" at the same time.
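The phase comparison described above can be modelled in software by locating the first rising edge of each waveform and reporting which occurs first. This is an idealized sketch over sampled 0/1 waveforms, not a circuit implementation; all waveform data below is invented for illustration.

```python
# Sketch: deciding which of two square waves leads, in the spirit of
# the reference-vs-feedback phase comparison described above. The
# waveforms are lists of 0/1 samples taken at equal intervals.


def leading_signal(reference, feedback):
    """Return 'in phase' if the rising edges coincide, otherwise the
    name of the signal whose rising edge occurs first (i.e., leads)."""

    def first_rising_edge(wave):
        # A rising edge is a 0 sample followed by a 1 sample.
        for i in range(1, len(wave)):
            if wave[i - 1] == 0 and wave[i] == 1:
                return i
        return None

    r = first_rising_edge(reference)
    f = first_rising_edge(feedback)
    if r == f:
        return "in phase"
    return "reference" if r < f else "feedback"


ref = [0, 0, 1, 1, 1, 0, 0, 1, 1]
fbk = [0, 0, 0, 1, 1, 1, 0, 0, 1]   # same wave delayed by one sample

print(leading_signal(ref, ref))  # in phase
print(leading_signal(ref, fbk))  # reference
```

A hardware phase detector makes the same lead/lag decision continuously; the output ("reference leads" vs. "feedback leads") is precisely the information a compensating delay circuit would need.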
There is therefore a need for a system and method for comparing a reference clock signal with a feedback signal in order to determine whether they are synchronized. If the signals are not synchronized, then the aforesaid system and method should be able to determine which signal is leading. This would allow the design engineer to take advantage of actual delay times in the system, as opposed to theoretical ones, thus allowing for a potentially faster clock speed. The result of such a system and method would be decreased processing time.