Read/write memories, also referred to as Random Access Memories (RAM), are widely used to store programs and data for microprocessors and other electronic devices. The availability of high speed, high density and low power RAM devices has played a crucial role in the price reduction of personal computers and in the integration of computer technology into consumer electronic devices.
A typical RAM includes a large number of memory cells arranged in an array of rows and columns. Each memory cell is typically capable of storing therein a binary digit, i.e. a binary ONE or a binary ZERO. Each row of the memory cell array is typically connected to a word line and each column of the memory cell array is typically connected to a pair of bit lines. Read and write operations are performed on an individual cell in the memory by addressing the appropriate row of the array using the word lines and addressing the appropriate cell in the addressed row using the bit lines. Depending upon the signals applied to the bit lines, a write operation may be performed for storing binary data in the RAM or a read operation may be performed for accessing binary data which is stored in the RAM. When read and write operations are not being performed, the RAM is typically placed in an idle operation for maintaining the binary data stored therein.
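The row-and-column addressing scheme described above can be sketched in a few lines of code. This is an illustrative software model only, not circuitry from the specification; the class name, method names, and array dimensions are all hypothetical.

```python
# Illustrative model of a RAM array addressed by word line (row)
# and bit line (column), as described in the passage above.
class RamArray:
    def __init__(self, rows, cols):
        # Each cell stores a single binary digit: a ONE or a ZERO.
        self.cells = [[0] * cols for _ in range(rows)]

    def write(self, word_line, bit_line, value):
        # Assert the word line to select a row, then drive the bit
        # lines to store a binary ONE or ZERO in the selected cell.
        self.cells[word_line][bit_line] = 1 if value else 0

    def read(self, word_line, bit_line):
        # Assert the word line, then sense the bit lines to recover
        # the binary digit stored in the selected cell.
        return self.cells[word_line][bit_line]

ram = RamArray(rows=4, cols=8)
ram.write(word_line=2, bit_line=5, value=1)
print(ram.read(word_line=2, bit_line=5))  # prints 1 (a binary ONE)
```

In an actual device, of course, the word line selects an entire row at once and the bit line pairs carry complementary analog signals; the model above captures only the addressing logic.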
RAMs are typically divided into two general classes, depending upon the need to refresh the data stored in the RAM during the idle state. In particular, in a Dynamic Random Access Memory (DRAM), the data stored in the memory is lost unless the memory is periodically refreshed during the idle operation. In contrast, in a Static Random Access Memory (SRAM) there is no need to refresh the data during an idle operation, because the data stored therein is maintained as long as electrical power is supplied to the SRAM. In the present state of the art, it is generally possible to fabricate higher density DRAM arrays than SRAM arrays because the individual memory cells of a DRAM include fewer transistors than the individual cells of an SRAM. However, SRAMs tend to operate at higher speeds than DRAMs, because there is no need to refresh the data stored therein. Accordingly, both SRAMs and DRAMs are typically used in computer systems, with SRAMs being used for high speed memory (often referred to as "cache" memory), while DRAMs are typically used for lower speed, lower cost mass memory.
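The DRAM refresh requirement described above can be illustrated with a toy charge-leakage model. This is a hypothetical sketch only; the leakage rate, refresh interval, and sensing threshold below are arbitrary illustrative numbers, not parameters from the specification.

```python
# Toy model of a DRAM cell during the idle state: the charge that
# represents a stored ONE leaks away each cycle unless the cell is
# periodically refreshed. All numeric constants are arbitrary.
def dram_idle(stored_one, idle_cycles, refresh_interval=None):
    charge = 1.0 if stored_one else 0.0
    for cycle in range(1, idle_cycles + 1):
        charge *= 0.8  # leakage each idle cycle (arbitrary rate)
        if refresh_interval and cycle % refresh_interval == 0:
            # Refresh: re-sense and rewrite the cell; this only works
            # while the stored ONE is still distinguishable from ZERO.
            charge = 1.0 if charge > 0.5 else 0.0
    return charge > 0.5  # sensed value after idling

print(dram_idle(True, 10, refresh_interval=2))  # True: data maintained
print(dram_idle(True, 10))                      # False: data lost
```

An SRAM cell, by contrast, would be modeled as simply holding its state for any number of idle cycles while power is applied, which is why no refresh circuitry (and no refresh-induced speed penalty) is needed.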
Three general design criteria govern the performance of random access memories. They are density, speed and power dissipation. Density describes the number of memory cells that can be formed on a given integrated circuit chip. In general, as more cells are fabricated on a Very Large Scale Integration (VLSI) chip, cost is reduced and speed is increased.
The performance of random access memories is also limited by the power consumption thereof. As power consumption increases, more sophisticated packaging is necessary to allow the integrated circuit to dissipate the high power. Moreover, high power circuits require expensive power supplies, and limit applicability to portable or battery powered devices.
Finally, speed is also an important consideration in the operation of a random access memory because the time it takes to reliably access data from the memory and write data into the memory is an important parameter in the overall system speed. It will be understood by those having skill in the art that the parameters of speed, density and power dissipation are generally interrelated, with improvements in one area generally requiring tradeoffs in one or more of the other areas.
In designing high density, high speed, low power random access memories, two general design areas may be pursued. The first is the design of the memory cell itself. For example, in a static random access memory, improved memory cell designs can permit high speed memory operations at low power consumption. One such improved design is described in copending application Ser. No. 07/619,101 entitled Static Random Access Memory (SRAM) Including Fermi Threshold Field Effect Transistors, by the present inventor Albert W. Vinal and assigned to the assignee of the present invention, which describes a high density, high speed, low power SRAM cell.
A second major area in designing a high speed, high density, low power random access memory is the design of the supporting circuits which allow reading of data from, writing of data into, and operational control of, the random access memory array. These circuits for reading, writing and controlling the operation of the RAM cell array are often critical limitations in the design of a high speed, high density, low power random access memory.
One particular criticality in the design of random access memories is the sense circuitry which is used to detect a binary ONE or binary ZERO from one or more cells in the random access memory during a read operation. Known sensing designs are slow and power hungry, and consume a disproportionate amount of chip "real estate" (area). In particular, a linear analog sense amplifier is typically used to amplify the signal from a selected cell in the memory in order to detect a binary ONE or binary ZERO, which is typically represented by a particular voltage level at the output of the selected cell.
In order to properly sense one of two voltage levels at the output of a particular cell, linear analog sense amplifiers typically require a reference or bias voltage, midway between the two voltage levels. See for example U.S. Pat. No. 4,914,634 to Akrout et al. entitled Reference Voltage Generator for CMOS Memories. Unfortunately, reference voltage generating circuits typically consume relatively large amounts of power on the integrated circuit and also take up critical chip area.
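The midpoint-reference scheme described above amounts to a simple comparison. The sketch below is hypothetical arithmetic, not an actual reference generator circuit; the two voltage levels are assumed values chosen for illustration.

```python
# Minimal sketch of midpoint-reference sensing: the sense amplifier
# compares the cell output against a reference voltage generated
# midway between the two logic levels. Voltage values are assumed.
V_ZERO = 1.0  # cell output level representing a binary ZERO (assumed)
V_ONE = 4.0   # cell output level representing a binary ONE (assumed)
V_REF = (V_ZERO + V_ONE) / 2.0  # reference midway between the levels

def sense(bit_line_voltage):
    # Resolve the cell output to a binary ONE or ZERO by comparison
    # against the midpoint reference.
    return 1 if bit_line_voltage > V_REF else 0

print(sense(3.6))  # prints 1: above the 2.5 V reference
print(sense(1.3))  # prints 0: below the 2.5 V reference
```

The cost criticized in the passage lies not in the comparison itself but in the analog circuitry needed to generate and hold V_REF stably on-chip, which consumes both power and area.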
Linear analog sense amplifiers have also required equalization of the bit lines prior to sensing, in order to prevent an imbalance in the bit lines from producing false data values. See for example U.S. Pat. No. 4,893,278 to Ito entitled Semiconductor Memory Device Including Precharge/Equalization Circuitry For The Complementary Data Lines. Unfortunately, the need for equalization adds to the complexity of the circuitry on the memory. Equalization also generally requires balanced transistors in the entire memory, thereby requiring tighter transistor tolerances and lowering the yield of the integrated circuit devices.
High gain, high speed linear sense amplifiers have reduced tolerance for imbalance, thereby decreasing the number of cells that can be coupled to the sense amplifier and further limiting the density of the memory array. The linear sense amplifier also limits the speed of the memory because linear sense amplifiers are limited by a given gain-bandwidth product, so that the higher the gain required, the slower the speed of the linear sense amplifier and vice versa.
Since linear sense amplifiers consume high power, many memory designs deactivate the sense amplifiers when a read operation is not being performed. Unfortunately, deactivation reduces the speed of the memory device because the sense amplifiers must be reactivated prior to a read operation.
Finally, at some point during the linear amplification of a read signal, the linearly amplified signal must be nonlinearly converted into a binary ONE or ZERO. Accordingly, the output of a sense amplifier is typically coupled to a latch, to thereby produce one or the other binary state. See for example U.S. Pat. No. 4,843,264 to Galbraith entitled Dynamic Sense Amplifier For CMOS Static RAM, and U.S. Pat. No. 4,831,287 to Golab entitled Latching Sense Amplifier. Unfortunately, sense amplifiers which include a combination of a linear analog sense amplifier and a nonlinear latch are complicated and are difficult to accurately control for high speed operation.
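The two-stage structure criticized above, a linear amplifier feeding a nonlinear latch, can be sketched as follows. The gain and trip-point values are hypothetical, and the sketch deliberately omits the timing control that the passage identifies as the difficult part of such designs.

```python
# Hypothetical sketch of a linear sense amplifier followed by a latch:
# the linear stage amplifies the small cell signal, and the latch
# nonlinearly resolves the result to a full binary ONE or ZERO.
GAIN = 50.0   # assumed linear sense amplifier gain
V_TRIP = 1.0  # assumed latch trip point, in volts

def read_bit(cell_signal_mv):
    amplified = GAIN * (cell_signal_mv / 1000.0)  # linear amplification
    return 1 if amplified > V_TRIP else 0         # latch snaps to ONE/ZERO

print(read_bit(40))  # prints 1: 40 mV amplified to 2.0 V, above trip
print(read_bit(5))   # prints 0: 0.25 V, below trip
```

What the sketch cannot show is the difficulty the passage points to: in a real circuit, the moment at which the latch is strobed relative to the amplifier settling must be controlled precisely, and that control grows harder as operating speed increases.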