A semiconductor memory unit is a collection of storage cells together with associated circuits needed to transfer information (data) in and out of the device. Two basic types of semiconductor memories are nonvolatile, of which a ROM (read-only memory) is typical, and volatile, of which a RAM (random access memory) is typical.
In ROM, data is permanently or semi-permanently stored and can be read at any time. In a ROM in which the data are permanently stored, the data are either manufactured into the device or programmed into the device and cannot be altered. In a ROM in which the data are semi-permanently stored, the data can be altered only by special methods, such as exposure to ultraviolet light or electrical erasure; ordinary write operations are not supported.
RAM is memory that has both read and write capabilities. RAM circuits generally come in two forms. The first form is the static RAM circuit ("SRAM"). A primary characteristic of an SRAM circuit is that it contains latches in which data may be retained indefinitely, provided power is connected to the circuit. The second form is the dynamic RAM circuit ("DRAM"). A primary characteristic of a DRAM circuit is that it uses charge-storing elements, such as capacitors, to retain the stored data in the storage locations, and the circuit must periodically refresh its data to retain it.
A conventional computer or processor has internal (or main) RAM. The computer can manipulate data only when it is in the main memory. Every program executed and file accessed must be copied from a storage device into main memory. After manipulation or utilization of the program or file data is complete, the RAM locations holding that data may be erased or overwritten by another program or file. Thus, the amount of main memory on a computer is important, as it determines how many programs can be executed at one time and how much data can be readily available to a program.
One constraint on computer memory (ROM or RAM) capacity is the physical size of the disk or chip. RAM capacity is further limited by power, heat, and manufacturing constraints. Because a single chip may store millions of bits of data, simplifying the chip circuitry that moves bits into and out of ROM and RAM is highly desirable.
The communication between a memory and its environment is achieved through data input and/or output lines, address selection lines, and control lines that specify the direction of transfer. In a conventional memory circuit, data is stored in a plurality of storage locations arranged as an array (or a group of sub-arrays) of memory cells. Each storage location is identified by an address, which might include both a row identifier and a column identifier. In conventional memory circuits, internal data lines transfer the data to the storage locations during a write cycle and transfer the data from the storage locations during a read cycle.
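The row-and-column addressing described above can be sketched in software as follows. This is a minimal illustrative model only; the array dimensions and the function name `decode_address` are assumptions for the example and are not taken from the disclosure.

```python
# Illustrative sketch: splitting a flat storage-location address into the
# row identifier and column identifier used to select one memory cell.
# ROWS and COLS are assumed dimensions, not values from the disclosure.

ROWS, COLS = 1024, 1024  # a 1,048,576-cell array of one-bit cells (assumption)

def decode_address(addr: int) -> tuple[int, int]:
    """Split a flat address into a (row, column) pair for the cell array."""
    if not 0 <= addr < ROWS * COLS:
        raise ValueError("address out of range")
    return divmod(addr, COLS)  # row = addr // COLS, column = addr % COLS

row, col = decode_address(1025)  # second row, second column
```

In an actual memory circuit this split is performed by the address selection lines and decoder hardware rather than by software; the sketch only shows the mapping from one address to a row identifier and a column identifier.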
A simplified overview of a prior art read cycle will now be described. Three generalized components of a prior art read cycle are represented in FIG. 1. Memory cell 10 is one of the thousands or millions of storage locations within a memory 12. While each storage location may accommodate one or more bits, to simplify the present discussion, it will be assumed that memory cell 10 has only one bit. For purposes of this discussion, it may be assumed that the proper addressing and control signals have been activated for accessing the contents of memory cell 10.
As is well known by those skilled in the art, bit data processing must occur within predetermined timing specifications. The rate of bit processing affects not only the overall speed of the processor; because successive bits occupy the same processing components and lines, slow processing of one bit delays the next. Thus, fast bit data processing speeds are desirable. Typically, however, the magnitude of the charge stored to represent a bit in memory is too low to quickly drive output circuits. Consequently, to ameliorate the aforementioned processing speed and power constraints, read processing circuitry 14 has been incorporated into memory chips to increase the speed of data read cycles and to process bit data to external circuitry 18. Generally, such circuitry is devised to quickly detect the status of the bit, i.e., "0" or "1", and to responsively provide a bit status data signal that can be quickly and accurately detected by the external circuitry.
Prior art read processing circuitry 14 has included transposing the bit data as represented in the memory cell bank to a format that is more suitable for processing. One such format represents each bit (0 or 1) as complementary voltages on dual data lines, A and B: the two lines carry opposite logic levels, so that the value of the bit is indicated by which of the two lines is high and which is low.
In this example, the signals on lines A and B are processed in parallel from the data lines to a latch. The latch receives the signals on lines A and B at its latch inputs and responsively provides output signals on output lines A and B. The signals on the output lines are preferably driven HIGH by the system power source and driven LOW by system ground, thus providing relatively strong output signals to the external circuitry.
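The dual-data-line format and the latch's rail-driven outputs can be modeled as follows. This is a behavioral sketch under stated assumptions: the helper names and the polarity convention (line A high for a "1") are illustrative, not specified by the disclosure.

```python
# Behavioral model of the dual-data-line format: one bit is represented as
# complementary levels on lines A and B, and a latch converts that pair
# into strong rail-driven outputs. Names and polarity are assumptions.

HIGH, LOW = 1, 0

def encode_bit(bit: int) -> tuple[int, int]:
    """Represent one data bit as complementary levels on lines (A, B)."""
    return (HIGH, LOW) if bit else (LOW, HIGH)

def latch_outputs(line_a: int, line_b: int) -> tuple[int, int]:
    """The latch drives its outputs to full rail levels (VCC / ground)
    according to which input line is higher."""
    return (HIGH, LOW) if line_a > line_b else (LOW, HIGH)

a, b = encode_bit(1)                 # complementary signals on A and B
out_a, out_b = latch_outputs(a, b)   # strong rail-driven outputs
```

The point of the model is that the latch's outputs depend only on the relative levels of A and B, which is why even a small differential at the inputs can yield full-rail output signals.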
In the dual data line embodiment discussed above, it has long been known in the art that there are advantages to "equalizing" the data lines and latch nodes using data line equalization circuitry and latch node equalization circuitry. Equalization ensures that the data lines begin at the same potential, thereby preconditioning the lines for the application of opposite (e.g., high or low) bit representation voltages, so that received data bit signals are detected quickly and accurately. It has been recognized in the prior art that these and other advantages are likewise realized by equalizing the data latch input nodes, which receive high and low data bit signals on the "A" line and the "B" line and responsively provide HIGH and LOW output signals.
In the prior art, the data latch nodes and the data lines are equilibrated by pre-charging both the latch nodes and the data lines to the same voltage magnitude. Typically, the latch nodes and data lines are both temporarily connected to a voltage source, such as the chip power supply. In this example, the data lines and latch nodes are both charged to VCC and then isolated from the chip power supply. The equilibrated data lines ("A" and "B") receive the bit data signals, which are thereafter (in accord with processor timing specifications) provided to the equilibrated data latch nodes. Such a pre-charge and latching process may be characterized as a 3-phase latch, as discussed below.
An example of a 3-phase read data latch system 20 is shown in FIG. 2. The read data latch system 20 shown functions under the control of control lines 24, 54, 64, and 66. Transistors 26, 28, and 34 function as switches for controlling the pre-charge and equalization of data bit input lines 22A and 22B. These switch transistors operate under the control of data line control line 24.
(Phase I) Initially, data line control line 24 is HIGH, control line 66 is also HIGH and control line 54 is LOW. Meanwhile, control line 64 remains LOW. In this state, data lines 22A and 22B are isolated from one another and from the data latch power source 60. Data line 22A and latch node 62A are in direct electrical communication via switch 56, and data line 22B and latch node 62B are in direct electrical communication via switch 58. Data latch nodes 62A and 62B are isolated from one another. Thus, in this state, the data bit signals provided on lines 22A and 22B will establish a differential signal on the nodes in latch 42.
(Phase II) Next, control line 64 is set HIGH so that latch nodes 62A and 62B may be driven by ground and latch power source 60, in accord with the differential data bit signals received from data lines 22A and 22B. At the same time, control line 54 is set HIGH, to isolate the data lines from the latch nodes, and control line 24 is set LOW. In this state, the data lines 22A and 22B are pre-charged by power source 60 and equalized through switch 30, and the latch 42 outputs a data bit signal on latch nodes 62A and 62B, the data bit signals being driven by the power source 60 and ground.
(Phase III) Next, control line 66 is set LOW and control line 64 is set LOW. In this state, the data latch nodes 62A and 62B are equalized to the HIGH voltage level in preparation for receiving a new differential signal when returning to Phase I.
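The three phases above can be summarized as a simple behavioral model. This is a sketch only: the voltage values, the differential swing, and the class and method names are illustrative assumptions; the control-line and node numerals in the comments follow the text's FIG. 2 description.

```python
# Simplified behavioral model of the 3-phase read data latch of FIG. 2.
# VCC, the differential swing, and all names are illustrative assumptions.

VCC = 1.0

class ThreePhaseLatch:
    def __init__(self):
        self.line_a = self.line_b = VCC   # data lines pre-charged to VCC
        self.node_a = self.node_b = VCC   # latch nodes equalized HIGH

    def phase1_sense(self, bit: int, swing: float = 0.1):
        # Lines isolated from each other and the supply; switches 56/58
        # connect each line to its latch node. The stored bit pulls one
        # line slightly low, establishing a differential on the nodes.
        if bit:
            self.line_b -= swing
        else:
            self.line_a -= swing
        self.node_a, self.node_b = self.line_a, self.line_b

    def phase2_latch_and_precharge(self) -> int:
        # Control 64 HIGH: the latch amplifies the differential to full
        # rails. Control 54 HIGH / 24 LOW: the data lines, now isolated
        # from the nodes, are re-pre-charged and equalized.
        bit = 1 if self.node_a > self.node_b else 0
        self.node_a, self.node_b = (VCC, 0.0) if bit else (0.0, VCC)
        self.line_a = self.line_b = VCC
        return bit

    def phase3_equalize_nodes(self):
        # Controls 66 and 64 LOW: latch nodes equalized HIGH for the
        # next Phase I.
        self.node_a = self.node_b = VCC

latch = ThreePhaseLatch()
latch.phase1_sense(bit=1)
value = latch.phase2_latch_and_precharge()
latch.phase3_equalize_nodes()
```

The model makes the cost visible: three distinct steps must complete in sequence before the next bit can be sensed, which is the clock-speed limitation the following paragraph identifies.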
In the above-described latch, each phase requires an execution time so that the switches may be set as indicated above and the nodes and lines may be driven to their respective voltage levels. The total time for a data read cycle is dependent upon and limited by the number of phases required by the latch design. A 3-phase latch, thus, inherently limits the clock speed of a data read cycle. Therefore, it is desired to overcome the clock speed limitations of a 3-phase data read latch.
The present invention relates to read path circuitry for memory integrated circuits and particularly to read data latch circuitry optimized for use in high speed memory integrated circuits.
The present invention can be used with any circuit that uses a latch to capture data on an internal bus. The invention allows the use of only two clock edges to perform the entire latch and precharge cycle. The present invention captures the small differential voltage on the internal bus and amplifies it. The result is a reduced cycle time, which provides for higher speed operation.
In the data read circuit disclosed herein, the data latch nodes are equilibrated, but not through a direct connection to a power source. Rather, each latch node is equilibrated by sharing the charge of its respective pre-charged data line. Specifically, the data lines, while isolated from the latch nodes, are equilibrated to VCC. Prior to the application of bit data on the data lines, a switch is activated so that each latch node is electrically connected to its respective data line. Because the capacitance of each latch node is much smaller than that of its respective data line, this charge sharing equilibrates the latch nodes to substantially VCC.
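The charge-sharing mechanism can be checked with the standard charge-conservation relation. The capacitance values below are illustrative assumptions chosen only to show the effect of a small node capacitance relative to the line capacitance; they are not taken from the disclosure.

```python
# Charge-sharing equilibration sketch: connecting a small-capacitance
# latch node to a much larger pre-charged data line settles both at a
# voltage very close to the line's pre-charge level (VCC).
# Capacitance values are illustrative assumptions.

VCC = 1.0

def shared_voltage(c_line: float, v_line: float,
                   c_node: float, v_node: float) -> float:
    """Charge conservation: V = (C1*V1 + C2*V2) / (C1 + C2)."""
    return (c_line * v_line + c_node * v_node) / (c_line + c_node)

# Data line pre-charged to VCC; latch node starting from 0 V (worst case).
v = shared_voltage(c_line=100e-15, v_line=VCC, c_node=2e-15, v_node=0.0)
# With a 50:1 capacitance ratio, v lands within about 2% of VCC, so the
# node is effectively equilibrated to VCC without touching the supply.
```

This is why the disclosed circuit can skip the separate node-precharge phase: the pre-charged data line itself supplies the equilibration charge when the connecting switch is activated.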