Memories like SRAM (static random access memory) have a large number of memory cells arranged in arrays. A particular memory cell inside an array is typically selected by a wordline and a pair of bitlines. The wordline is typically connected to one or more control gates of every memory cell in a row. Where the control gates are NMOS transistors, all the memory cells in a row are turned on when the wordline connected thereto is driven to a high voltage, i.e., is activated. The bitline pair typically connects the storage nodes of every memory cell in a column to a sense amplifier. The memory cell at the cross point of the activated wordline and the selected bitline pair is the one that is selected.
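The row/column selection described above can be sketched in a few lines of Python. This is a purely illustrative behavioral model, not circuit-accurate; the class and method names (`SramArray`, `read`, `write`) are hypothetical.

```python
# Behavioral sketch of SRAM cell selection: activating one wordline
# selects a row, and the chosen bitline pair selects the column, so the
# cell at their cross point is the one read or written.

class SramArray:
    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        # One stored bit per cell; all cells initialized to 0.
        self.cells = [[0] * cols for _ in range(rows)]

    def write(self, wordline, bitline_pair, bit):
        # The activated wordline turns on every cell in the row, but only
        # the selected bitline pair drives a new value into its cell.
        assert 0 <= wordline < self.rows and 0 <= bitline_pair < self.cols
        self.cells[wordline][bitline_pair] = bit & 1

    def read(self, wordline, bitline_pair):
        # Only the selected bitline pair's storage-node value reaches the
        # sense amplifier; model that as returning the cross-point bit.
        assert 0 <= wordline < self.rows and 0 <= bitline_pair < self.cols
        return self.cells[wordline][bitline_pair]

arr = SramArray(4, 4)
arr.write(2, 3, 1)
print(arr.read(2, 3))  # -> 1: cell at wordline 2, bitline pair 3
```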
Memories are conservatively designed to provide enough read margin for reading even the weakest selected bits (e.g., a 6-sigma weak bit at the FF/SSG/m40 corner). Such a weak bit typically provides about half the cell current of a normal bit cell, which means that twice as much time is needed to develop the read margin required to distinguish the bit as a logical "1" or "0". For those SRAM macros (memory dies) that do not contain these worst-case weak bits, performance suffers under this conservative design. One approach to addressing this problem involves providing multiple, selectable timing loops (selected depending on whether a weak bit is present) for generating an internal clock reference. However, this approach requires complex finite state machine control circuitry and has significant area penalties. A second approach involves providing extra selectable logic delays, typically implemented as long inverter chains. However, this approach also has significant area penalties, as well as a poor tracking ratio and a narrow tuning range.
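The "half the cell current, twice the time" claim follows directly from a first-order bitline-discharge model, in which the time to develop a given differential voltage is t = C_bl * dV / I_cell. The sketch below works this out numerically; the capacitance, margin, and current values are assumed for illustration and do not come from the text.

```python
# First-order read-timing model: a constant cell current I_cell discharges
# the bitline capacitance C_bl until a sense margin dV has developed, so
# t = C_bl * dV / I_cell. Halving I_cell therefore doubles t.

def time_to_develop_margin(c_bl, delta_v, i_cell):
    """Seconds for i_cell (A) to develop delta_v (V) on c_bl (F)."""
    return c_bl * delta_v / i_cell

C_BL = 100e-15     # bitline capacitance, 100 fF (assumed value)
DELTA_V = 0.1      # required sense margin, 100 mV (assumed value)
I_NOMINAL = 20e-6  # nominal cell read current, 20 uA (assumed value)

t_nominal = time_to_develop_margin(C_BL, DELTA_V, I_NOMINAL)
# A worst-case weak bit supplies roughly half the nominal cell current,
# so developing the same margin takes about twice as long.
t_weak = time_to_develop_margin(C_BL, DELTA_V, I_NOMINAL / 2)

print(f"nominal: {t_nominal * 1e9:.2f} ns")  # -> nominal: 0.50 ns
print(f"weak:    {t_weak * 1e9:.2f} ns")     # -> weak:    1.00 ns
```

A conservatively timed design must budget for `t_weak` on every read, which is why macros without worst-case weak bits pay a performance penalty under this scheme.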