Typical integrated circuits (ICs, or “chips”) include large numbers of synchronous storage elements sharing a common clock signal. Ideally, each signal edge of the common clock signal arrives at each destination simultaneously. In practice, however, this ideal is difficult to achieve. The extent to which a propagating clock signal arrives at different destinations at different times is commonly referred to as “clock skew.” In general, clock skew is the maximum difference between the clock-edge arrival times at two or more clock destination nodes.
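The definition above reduces to a simple computation, sketched below with hypothetical edge-arrival times (the node names and picosecond values are illustrative only):

```python
def clock_skew(arrival_times_ps):
    """Clock skew: the maximum difference between clock-edge
    arrival times at any two destination nodes (picoseconds)."""
    times = list(arrival_times_ps)
    return max(times) - min(times)

# Hypothetical edge-arrival times at four destination flip-flops.
arrivals = {"ff_a": 102.0, "ff_b": 98.5, "ff_c": 100.0, "ff_d": 101.2}
print(clock_skew(arrivals.values()))  # → 3.5
```

Because skew is defined pairwise over all destinations, the maximum pairwise difference is simply the latest arrival minus the earliest.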
Clock distribution networks are routinely modeled and simulated to minimize the predicted, or “nominal,” clock skew. The main contributors to nominal clock skew are resistive, capacitive, and inductive loading of the clock lines. These loading effects are well understood, and so can be modeled to produce accurate behavioral predictions. Unfortunately, such predictions do not fully account for less predictable skew variations, such as those imposed by process, supply-voltage, and temperature variations.
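A common first-order way to model the resistive and capacitive loading mentioned above is the Elmore delay of an RC ladder. The sketch below is illustrative, not taken from the text, and the segment resistance and capacitance values are hypothetical:

```python
def elmore_delay(segments):
    """First-order Elmore delay of an RC ladder.

    Each upstream resistance charges the sum of all downstream
    capacitances. `segments` is a list of (R_ohms, C_farads)
    pairs ordered from the clock driver toward the destination.
    Returns the delay estimate in seconds.
    """
    delay = 0.0
    for i, (r, _) in enumerate(segments):
        downstream_c = sum(c for _, c in segments[i:])
        delay += r * downstream_c
    return delay

# Hypothetical 3-segment clock line: 10 ohms and 20 fF per segment.
line = [(10.0, 20e-15)] * 3
print(elmore_delay(line))  # ~1.2e-12 s (1.2 ps)
```

Adding a capacitive load to a path, as described in the next paragraph, increases the downstream capacitance seen by every upstream resistance and so slows the path.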
Clock skew is typically minimized by balancing the signal propagation delays of the various clock paths, which involves equalizing the loads associated with those paths. In a typical example, inverters and capacitors are included along relatively fast clock paths to increase the load on, and thereby reduce the speed of, those paths. Unfortunately, adding loads to clock lines wastes power and tends to increase clock jitter.
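The balancing approach can be sketched as follows: pad each fast path with enough extra delay (e.g., added inverter or capacitor load) that every path matches the slowest one, driving nominal skew toward zero. The path delays below are hypothetical:

```python
def padding_delays(path_delays_ps):
    """Delay balancing: return the extra delay each path needs
    so that all paths match the slowest path (same units in/out).
    In hardware, this padding is realized by adding inverter or
    capacitor loads to the faster paths."""
    slowest = max(path_delays_ps)
    return [slowest - d for d in path_delays_ps]

paths = [95.0, 100.0, 98.0]   # hypothetical path delays (ps)
print(padding_delays(paths))  # → [5.0, 0.0, 2.0]
```

Note that this scheme only ever slows paths down, which is why it costs power: every added load must be driven on every clock edge.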
Even if a clock network is perfectly balanced (i.e., if the clock skew is zero), the signal propagation delay through the network can vary significantly with process, voltage, and temperature (PVT) variations. Such variations can be problematic whether they increase or reduce signal propagation delay: a slow clock network reduces speed performance; a fast clock network increases noise and power consumption. There is therefore a need for improved methods and systems for distributing low-skew, predictably timed clock signals.
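The magnitude of the PVT problem can be illustrated by scaling a nominal path delay across characterization corners. The corner names follow common industry convention (slow/typical/fast), but the multipliers below are illustrative assumptions, not figures from the text:

```python
# Hypothetical PVT-corner scaling of a nominal clock-path delay.
NOMINAL_DELAY_PS = 100.0

# Assumed corner multipliers (illustrative only).
corners = {
    "slow (SS process, low V, hot)":    1.30,
    "typical (TT)":                     1.00,
    "fast (FF process, high V, cold)":  0.75,
}

delays = {name: NOMINAL_DELAY_PS * k for name, k in corners.items()}
spread = max(delays.values()) - min(delays.values())
print(delays)
print(spread)  # → 55.0 (ps of delay variation across corners)
```

Even with zero nominal skew, a spread of this kind means the clock arrives at every destination early or late together, which still erodes timing margin against external interfaces and other clock domains.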