Clock duty cycle error is typically corrected in a continuous servo loop. In a conventional implementation, an error detector generates a continuous stream of error direction signals indicating which of the high and low clock phases is longer, and an adjustment circuit incrementally adjusts the clock duty cycle in response to each error direction signal.
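The conventional servo behavior described above can be illustrated with a toy model. The sketch below is not from the source (the actual circuits are analog hardware); it is a hypothetical bang-bang loop in which a detector emits only an error direction and an adjuster takes one fixed incremental step per detection, so correction of a large initial error is slow and the loop dithers around the target once converged.

```python
def correct_duty_cycle(initial_duty, step=0.005, iterations=200):
    """Toy model of a continuous bang-bang duty-cycle servo loop.

    initial_duty: starting high-phase fraction of the clock period (0..1).
    step: size of one incremental adjustment (hypothetical value).
    iterations: number of detect/adjust cycles to simulate.
    """
    duty = initial_duty
    for _ in range(iterations):
        # Error detector: indicate only which phase (high or low) is longer.
        if duty > 0.5:
            error_direction = +1   # high phase longer
        elif duty < 0.5:
            error_direction = -1   # low phase longer
        else:
            error_direction = 0
        # Adjustment circuit: one fixed incremental step per error signal.
        duty -= error_direction * step
    return duty
```

For example, `correct_duty_cycle(0.42)` converges to within one step of the 50% target, taking sixteen detect/adjust cycles to close an 8% error; this step-at-a-time behavior is the "slow, incremental" adjustment the next paragraph identifies as a bottleneck.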
While the continuous servo approach works acceptably in some applications, shrinking voltage supply headroom, decreasing output impedance, and increasing sub-threshold leakage are becoming problematic for traditional analog error detector implementations. Beyond the implementation challenges, continuous error detection and correction exacts a cost from increasingly limited power budgets, and the slow, incremental duty cycle adjustment is becoming a performance bottleneck in the face of the increasingly complex clocking schemes employed in modern electronic devices (e.g., on-demand frequency transitions and clock start/stop).