One of the basic units of a microprocessor or similar data-processing computer is the random-access memory or RAM. In a RAM, data is stored at a plurality of address locations as one of two discrete logic levels, a logic 1 or a logic 0. A logic signal can be "read" from any of the address locations by addressing the location from which the stored signal is to be read. Similarly, a new logic signal may be written into any selected address location by addressing that location, the new logic signal replacing the logic signal previously stored there.
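By way of illustration only, the addressed read and write operations described above can be sketched as a toy software model. The class and names below are hypothetical and stand in for hardware address decoding; they are not part of any actual device.

```python
class RAM:
    """Toy model of a random-access memory: each address location
    stores one of two discrete logic levels, 0 or 1."""

    def __init__(self, size):
        self.cells = [0] * size  # every location starts at logic 0

    def read(self, address):
        # Addressing selects one location; its stored logic level is returned.
        return self.cells[address]

    def write(self, address, bit):
        # A write replaces whatever logic level was previously stored there.
        self.cells[address] = bit

ram = RAM(8)
ram.write(3, 1)
print(ram.read(3))  # -> 1
print(ram.read(0))  # -> 0 (unwritten locations retain their prior level)
```

A read leaves the stored level unchanged; only a write to the selected address replaces it.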
Two types of RAMs are in common use. One type of RAM is the dynamic RAM or DRAM. In a typical DRAM each address location includes a memory cell consisting of a single MOS transistor and a capacitor connected to the transistor. The absence or presence of a charge on the capacitor represents a stored logic 0 or a stored logic 1, respectively. During a read operation, the charge on the capacitor is distributed through the accessed transistor to the column line to which the cell is connected. The column line is connected to a sense amplifier which typically is also connected to a dummy cell and which produces an amplified signal based on the data signal on the column line. Since the charge on the capacitor will decay over time, it is necessary to periodically refresh the data in DRAM memory cells. To this end, DRAMs employ relatively complex refresh and associated clock circuits.
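The decay-and-refresh behavior of the one-transistor/one-capacitor cell can be sketched as a minimal simulation. The leakage rate and sense threshold below are illustrative assumptions, not characteristics of any particular DRAM.

```python
READ_THRESHOLD = 0.5   # fraction of full charge the sense amplifier resolves as a 1 (assumed)
LEAK_PER_TICK = 0.1    # fractional charge lost per time step (illustrative only)

class DRAMCell:
    """Toy one-transistor/one-capacitor cell: a stored bit is a capacitor
    charge, whose presence or absence represents logic 1 or logic 0."""

    def __init__(self):
        self.charge = 0.0

    def write(self, bit):
        self.charge = 1.0 if bit else 0.0

    def leak(self, ticks=1):
        # The capacitor charge decays over time.
        for _ in range(ticks):
            self.charge *= (1.0 - LEAK_PER_TICK)

    def read(self):
        # The sense amplifier compares the remaining charge to a reference level.
        return 1 if self.charge > READ_THRESHOLD else 0

    def refresh(self):
        # Periodic refresh rewrites the sensed value at full strength.
        self.write(self.read())

cell = DRAMCell()
cell.write(1)
cell.leak(5)        # partial decay: 0.9**5 is about 0.59, still read as 1
cell.refresh()      # restores full charge
cell.leak(20)       # without further refresh, 0.9**20 is about 0.12
print(cell.read())  # -> 0: the stored 1 has been lost
```

The model shows why refresh must recur within a bounded interval: once the charge decays past the sense threshold, the stored datum is unrecoverable.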
The other conventional type of RAM is the static RAM or SRAM, in which data is represented by the voltage levels at two data nodes in a flip-flop or latch. The latch is coupled through access transistors to the column or bit lines, which are, in turn, coupled to a sense amplifier. Because the data stored in a latch does not decay, no extra refresh circuitry is required in an SRAM. The conventional SRAM cell typically requires at least six MOS transistors, as compared to the one MOS transistor and capacitor commonly employed in a DRAM memory cell.
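The latch-based cell can likewise be sketched as a toy model, with the two complementary data nodes of the flip-flop represented explicitly. This is a hypothetical illustration of the stored-state behavior, not a circuit-level description of any six-transistor cell.

```python
class SRAMCell:
    """Toy six-transistor cell modeled as a latch holding two
    complementary data nodes, q and q_bar."""

    def __init__(self):
        self.q = 0        # data node
        self.q_bar = 1    # complementary data node

    def write(self, bit):
        # Access transistors drive both data nodes; the latch then
        # holds the new state indefinitely while power is applied.
        self.q = bit
        self.q_bar = 1 - bit

    def read(self):
        # The stored level does not decay, so no refresh is needed;
        # the latch keeps the two nodes complementary.
        assert self.q_bar == 1 - self.q
        return self.q

cell = SRAMCell()
cell.write(1)
print(cell.read())  # -> 1, regardless of how much time elapses
```

In contrast to the DRAM cell, there is no decay mechanism here: the read value is stable until the next write.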
Because of their respective memory cell arrangements as described, SRAMs and DRAMs have several major advantages and disadvantages with respect to one another. Namely, DRAMs, which require only a single MOS transistor and capacitor to form a memory cell, offer higher density since they require less area, typically one-tenth, to fabricate, and also cost less per bit of stored data as compared to SRAMs. However, because of their need for refreshing, DRAMs must use an external clock and refresh operation. DRAMs thus require many relatively complex peripheral circuits and complex timing circuits and, as a result, typically have longer access times than SRAMs. SRAMs, on the other hand, are easier to use since they do not require the external clock and refresh operation, and have faster access times. Their disadvantages vis-a-vis DRAMs are their lower density and higher cost per bit.
This situation has prevailed for many years. Even with their slower operating speeds, DRAMs, as a result of their greater density and lower cost, are more often used than SRAMs. Thus, in most cases in which a random-access memory is required, microprocessor designers forgo the inherently faster SRAMs because of their relatively lower density and higher cost.
Numerous workers in the field of memory design have attempted to realize some of the advantages of DRAMs in an SRAM. Examples of these efforts are described in the following articles: "A 256K CMOS SRAM with Internal Refresh" by S. Hanamura et al., Proceedings of the 1987 IEEE International Solid-State Circuits Conference, p. 250; "Static RAMs" by Schuster et al., Proceedings of the 1984 IEEE International Solid-State Circuits Conference, p. 226; "A 30-µA Data-Retention Pseudostatic RAM with Virtually Static RAM Mode" by Sawada et al., IEEE Journal of Solid-State Circuits, Vol. 23, No. 1, Feb. 1988; "1-Mbit Virtually Static RAM" by Nogami et al., IEEE Journal of Solid-State Circuits, Vol. SC-21, No. 5, Oct. 1986; and "A 288K CMOS Pseudostatic RAM" by Kawamoto et al., IEEE Journal of Solid-State Circuits, Vol. SC-18, No. 5, Oct. 1983.
This situation has generally been found acceptable so long as the microprocessor operated at relatively low speeds. However, recent microprocessor designs, such as the Intel 80386, have created a need for higher-speed memories, particularly as cache memories. In one attempt to satisfy this requirement for higher-speed memories, an SRAM has been employed with a microprocessor as a cache memory in conjunction with a DRAM. The intent of this design was to take advantage of the higher operating speed of the SRAM while also employing the DRAM for its higher density and lower cost. This combined use of an SRAM and DRAM, however, also increases the complexity of the overall system with regard to the addressing and control of the two types of random-access memory employed.
It is accordingly an object of the invention to provide an SRAM memory cell which requires fewer transistors and can thus be fabricated in a smaller area.
It is a more general object of the invention to provide a memory cell for an SRAM which is more comparable to existing DRAMs in density and cost, but which operates at a higher speed than existing DRAMs, thereby allowing the use of the higher-speed SRAM in applications in which lower-cost, higher-density DRAMs have heretofore been used.