The most cost-effective form of computer system random access memory continues to be the conventional dynamic random access memory (DRAM). At the same time, system bus speeds continue to outpace conventional DRAM cycle times. To overcome the limited speeds of DRAMs, it is well known in the art to use a cache RAM system employing one or more levels of fast RAM devices between the microprocessor and the system RAM in order to increase system performance.
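The cache arrangement described above can be illustrated with a minimal sketch, assuming a single-level direct-mapped cache of one-word lines; the class name, sizes, and latency figures below are invented for illustration and are not taken from any particular device.

```python
# Minimal sketch: a fast cache RAM placed between the processor and
# slow system DRAM. Sizes and cycle counts are illustrative assumptions.

class DirectMappedCache:
    """Direct-mapped cache of `lines` one-word lines over a backing store."""

    def __init__(self, backing, lines=8, hit_cost=1, miss_cost=10):
        self.backing = backing          # slow system DRAM, modeled as a list
        self.lines = lines
        self.tags = [None] * lines      # tag stored per cache line
        self.data = [0] * lines
        self.hit_cost = hit_cost        # fast SRAM access time (cycles, assumed)
        self.miss_cost = miss_cost      # DRAM access time (cycles, assumed)

    def read(self, addr):
        """Return (value, cycles) for a read at word address `addr`."""
        index = addr % self.lines
        tag = addr // self.lines
        if self.tags[index] == tag:     # hit: served from the fast cache RAM
            return self.data[index], self.hit_cost
        # miss: fetch from DRAM and fill the cache line
        self.tags[index] = tag
        self.data[index] = self.backing[addr]
        return self.data[index], self.miss_cost
```

A first read of an address pays the DRAM cost; a repeated read is served at the cache's cycle time, which is the source of the system-level speedup.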
Cache RAMs are typically composed of static random access memories (SRAMs). SRAMs have faster cycle times than DRAMs and eliminate the need to refresh memory cells. However, as microprocessor clock speeds push well past 100 MHz, it is increasingly difficult to produce SRAMs having fast enough cycle times using conventional design approaches.
SRAMs include both asynchronous SRAMs and synchronous SRAMs. As is well understood in the art, asynchronous SRAMs are self-timed, typically by detecting a change in the external address received from the system. In contrast, synchronous SRAMs operate in synchronism with an external clock. This can be particularly advantageous in burst read and/or write operations where a series of memory locations are accessed in response to a single external address.
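The burst behavior described above can be sketched as follows, assuming the external address is latched once and an internal counter then supplies successive addresses on each clock edge; the class and method names, and the linear (non-wrapping) increment, are illustrative assumptions.

```python
# Sketch of a synchronous SRAM burst read: one external address is
# latched, then each clock edge delivers the next sequential word.

class BurstSram:
    def __init__(self, contents):
        self.contents = contents    # the memory array, modeled as a list
        self.counter = 0            # internal burst address counter

    def latch_address(self, addr):
        """Latch the external address on a clock edge."""
        self.counter = addr

    def clock(self):
        """Each subsequent clock edge returns the next word in the burst."""
        word = self.contents[self.counter % len(self.contents)]
        self.counter += 1
        return word

mem = BurstSram(list(range(16)))
mem.latch_address(4)
burst = [mem.clock() for _ in range(4)]   # four words from one address
```

Only one external address is presented for the whole burst, which is why synchronous operation is advantageous here: the system bus is freed from supplying an address on every access.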
U.S. Pat. No. 5,126,975 issued to Handy et al. on Jun. 30, 1992 discloses a synchronous SRAM having a burst read and a burst write capability. The external address is latched on a clock edge to generate an internal address, which is then applied to the SRAM array.
An example of a synchronous DRAM is disclosed in U.S. Pat. No. 5,341,341 issued to Yukio Fukuzo on Aug. 23, 1994. The synchronous DRAM has three operating sections: an addressing section (ADD), a data accessing section (RAMP), and a data read-out section (ROUT). These sections are pipelined according to one of three modes. In the first mode, data are passed from one section to the next on each clock edge. In the second mode, the second section follows from the first without clocking. In the third mode, the first, second, and third sections follow sequentially from each clock edge.
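One reading of the three modes is that they differ in how many sections are traversed per clock edge, giving a three-, two-, or one-clock latency from address-in to data-out. The sketch below simulates that interpretation; the stage functions are trivial placeholders, and the grouping of sections per mode is an assumption drawn from the description above rather than from the patent itself.

```python
# Sketch of the three pipelining arrangements of the ADD, RAMP, and
# ROUT sections. Stage behavior is a placeholder; only the clocking
# structure is of interest.

def simulate(addresses, mode, array):
    """Feed one address per clock edge; return the data-out seen each clock."""
    add = lambda a: a            # ADD: address decode (placeholder)
    ramp = lambda a: array[a]    # RAMP: memory cell access
    rout = lambda d: d           # ROUT: output buffer (placeholder)

    # Group the sections into clocked stages according to the mode.
    if mode == 1:       # each section clocked separately (fully pipelined)
        stages = [add, ramp, rout]
    elif mode == 2:     # RAMP follows ADD without a clock
        stages = [lambda a: ramp(add(a)), rout]
    else:               # all three sections traversed in one clock edge
        stages = [lambda a: rout(ramp(add(a)))]

    regs = [None] * (len(stages) - 1)            # pipeline registers
    out = []
    feed = list(addresses) + [0] * (len(stages) - 1)  # dummy addresses drain the pipe
    for x in feed:
        vals = [x] + regs
        results = [f(v) if v is not None else None for f, v in zip(stages, vals)]
        out.append(results[-1])                  # data-out this clock (None = not ready)
        regs = results[:-1]
    return out
```

Under this model a single read returns after three, two, or one clock edges in modes 1, 2, and 3 respectively, while mode 1 permits the highest clock rate since each edge spans only one section.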
It is known in the prior art to reduce cell access times by carefully executed timing sequences. U.S. Pat. No. 4,845,677 issued to Chappell et al. on Jul. 4, 1989 discloses a memory chip having a number of sub-arrays having local decoding and precharging. Block-to-block self-timing is used in the critical paths of the circuit.
Commonly owned, co-pending U.S. patent application Ser. No. 514,693 entitled TIMING CONTROL CIRCUIT FOR SYNCHRONOUS STATIC RANDOM ACCESS MEMORY, now U.S. Pat. No. 5,559,752, discloses a timing control circuit that includes a read/write sequence for sequentially activating sense circuits in an I/O path to access a memory cell. In a reset sequence the I/O path is pre-charged and equalized. The reset sequence is initiated before the data has completely propagated through the I/O path, reducing the overall cycle time of the device.
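The cycle-time benefit of initiating the reset sequence early can be shown with a back-of-the-envelope sketch; all timing figures below are invented for illustration and do not come from the referenced application.

```python
# Sketch: overlapping the reset (precharge and equalize) sequence with
# the tail of data propagation through the I/O path shortens the cycle.
# All figures are illustrative assumptions, in nanoseconds.

T_PROPAGATE = 8.0   # data fully propagates through the I/O path
T_RESET = 4.0       # time to precharge and equalize the I/O path
T_OVERLAP = 3.0     # portion of reset that can overlap propagation

# Reset waits for data to finish propagating:
sequential_cycle = T_PROPAGATE + T_RESET

# Reset is initiated before propagation completes:
overlapped_cycle = T_PROPAGATE + T_RESET - T_OVERLAP
```

With these assumed figures the overlapped sequence trims the cycle from 12 ns to 9 ns, which is the mechanism by which the referenced timing control circuit reduces the overall cycle time of the device.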