1. Field of the Invention
The invention relates to computer memory systems and, more particularly, to a dual path memory retrieval system for an interleaved dynamic RAM memory unit.
2. Description of Related Art
Data processing systems for computers use memory to store information. More specifically, the data processor of a computer stores individual units of information, each consisting of a specific number of bits representing binary digits, at specific locations within a memory unit. The locations within the memory where data bits are stored are themselves specified by addresses. Each address consists of a specific number of bits, and the total number of bits available for address information defines the total number of memory locations that can be addressed within the computer. The total number of addressable memory locations, in turn, limits the amount of information that can be stored and accessed by the data processor. This memory limitation constrains the capability of the computer system in performing its data processing functions.
Depending on their access characteristics, computer memory structures may be categorized into one of two types of memory configurations. One type of memory unit is referred to as the read only memory (or "ROM") type of memory. In general, ROMs are characterized by the permanent storage of information at selected locations. A random access memory (or "RAM"), on the other hand, is generally characterized by the ability to both write information into and read information out of the memory at any location and in any desired sequence.
A typical RAM consists of a plurality of memory cells, an address decoder, read/write control circuitry and a memory output register. While there are many variations in the structure of and interconnection between the basic elements of RAMs which are utilized to separate different RAM designs into numerous classifications, RAMs may be separated into two distinct types based on the structure of the memory cells used in the memory unit--the "static" RAM (or "SRAM") and the "dynamic" RAM (or "DRAM"). In the SRAM, each memory cell consists of a flip-flop circuit, comprised of four or six transistors, which has two stable states. As long as power is supplied to the memory cells, the information stored in the cells will be maintained.
In contrast, each memory cell of a DRAM includes a microscopic "storage" capacitor consisting of two conductive layers separated by an insulator. The memory cell of a DRAM stores a single bit of information in the microscopic capacitor as the presence or absence of an electrical charge in that capacitor. A charged capacitor generally represents a "1" and a discharged capacitor generally represents a "0". Usually, a single transistor is used to control the charging of the storage capacitor. Since the electric charge stored in the storage capacitor of a memory cell will gradually leak away, the stored information must be periodically rewritten into the cell before the charge completely leaks out. This periodic rewriting of the information previously stored in the memory cell is called "refreshing" the memory. The frequency at which a memory cell must be refreshed varies depending on the rate of charge leakage in the cell. In a typical DRAM, each memory cell must be refreshed every eight milliseconds or less. Although the refreshing operation requires additional circuitry to coordinate the procedure, the DRAM is often used due to certain advantages over the SRAM. For example, because the DRAM requires only a single control transistor while the SRAM requires four or six transistors for its flip-flop, the DRAM occupies a much smaller area on the silicon substrate than the SRAM and is less expensive to manufacture.
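The refresh requirement described above can be sketched as a simple simulation. This is an illustrative model only, not taken from the patent: the threshold, leakage rate and decay model are hypothetical, and real DRAM leakage is not linear.

```python
# Illustrative model of DRAM refresh (hypothetical constants): a cell's
# capacitor charge decays over time, and a stored "1" is lost once the
# charge falls below the sense threshold, unless refreshed in time.

REFRESH_INTERVAL_MS = 8   # typical maximum interval cited in the text
SENSE_THRESHOLD = 0.5     # hypothetical fraction of full charge
LEAK_PER_MS = 0.04        # hypothetical (linearized) leakage rate

def charge_after(ms_since_refresh, initial=1.0):
    """Remaining capacitor charge after a given number of milliseconds."""
    return max(0.0, initial - LEAK_PER_MS * ms_since_refresh)

def cell_reads_correctly(ms_since_refresh):
    """A stored '1' is still sensed as '1' while charge exceeds threshold."""
    return charge_after(ms_since_refresh) > SENSE_THRESHOLD

# Refreshing within the 8 ms window preserves the stored bit...
assert cell_reads_correctly(REFRESH_INTERVAL_MS)
# ...but waiting far longer loses it.
assert not cell_reads_correctly(20)
```

In this toy model, refreshing simply resets the elapsed time to zero, restoring the capacitor to full charge before the bit can be lost.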
In a memory unit which is comprised of DRAMs, both memory access (i.e. writing to or reading from a memory cell) and refresh operations are controlled by a combination of a pair of signals called a row address strobe (or "RAS") signal and a column address strobe (or "CAS") signal. During a memory access operation, the RAS and CAS signals are used to select the particular memory cell to be accessed. Some DRAMs also require manipulation of both the RAS and CAS signals to perform a refresh cycle. Other DRAMs may be refreshed by activating only the RAS signal. In order to select a desired one of the DRAM storage cells for the reading of data therefrom or the writing of data thereto, a plurality of address inputs are provided to the DRAM. In operation, the high order address bits are first applied to their respective address inputs, followed by the assertion of the RAS signal. Assertion of the RAS signal causes the row address of the storage array to be latched within the DRAM. The low order address bits are next applied, followed by the assertion of the CAS signal to latch the column address of the storage array. The particular combination of the row and column addresses is then decoded by row and column address decoder circuitry to select one of the binary storage cells for reading information therefrom or writing information thereto.
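The multiplexed row/column addressing sequence described above can be sketched as follows. This is a minimal illustrative model, not a description of any particular device: the class, method names, and array dimensions are hypothetical, and timing constraints are ignored.

```python
# Illustrative sketch of multiplexed DRAM addressing: one set of address
# pins carries first the high order (row) bits, latched on RAS, and then
# the low order (column) bits, latched on CAS. Dimensions are hypothetical.

ROW_BITS = 4   # assumed tiny array for illustration: 16 rows x 16 columns
COL_BITS = 4

class Dram:
    def __init__(self):
        self.cells = [[0] * (1 << COL_BITS) for _ in range(1 << ROW_BITS)]
        self.row = None
        self.col = None

    def assert_ras(self, addr_pins):
        self.row = addr_pins          # row address latched on RAS assertion

    def assert_cas(self, addr_pins):
        self.col = addr_pins          # column address latched on CAS assertion

    def write(self, bit):
        self.cells[self.row][self.col] = bit

    def read(self):
        return self.cells[self.row][self.col]

dram = Dram()
full_address = 0b10110011                              # 8 address bits total
dram.assert_ras(full_address >> COL_BITS)              # row    = 0b1011
dram.assert_cas(full_address & ((1 << COL_BITS) - 1))  # column = 0b0011
dram.write(1)
assert dram.read() == 1
```

Multiplexing the row and column halves over the same pins is what allows a DRAM to address its full array with half as many address pins as a non-multiplexed design.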
Numerous techniques have been developed to increase the speed at which a computer system operates. In particular, techniques for increasing the speed at which a computer system is capable of writing data to or reading data from its memory unit have been the subject of continuing development. One such technique has been to handle the data to be processed in larger units so that more bits of data are moved through the computer system per unit of time. For example, the 80386 and 80486 microprocessors manufactured by the Intel Corporation of Santa Clara, Calif., utilize 32 bit, double word architectures to handle data faster than prior art processors, which used 16 bit words. Similarly, storing and handling data in system memory in 64 bit units, i.e., four contiguous words of 16 bits each or two contiguous double words of 32 bits each, also enables faster data access. However, because of both connector pin limitations and the fact that current CPUs process data in 32 bit double words, it has been necessary to transmit and handle data in 32 bit units, even though 64 bit wide memories can be implemented by interleaving two 32 bit memory banks.
In an interleaved memory system, the data bits which comprise a data block are distributed to specified locations in a series of memory banks for storage. To read the data block, the specified location in each memory bank is accessed and the accessed data bits are multiplexed together to reassemble the data block. Interleaved memory systems are often used to enable the faster handling of data in a computer system. For example, the system of the present invention stores data in 64 bit blocks. However, because the processor and error correction code circuitry of the present system only handle 32 bit double words, interleaving is used to handle the entry and retrieval of each pair of 32 bit double words comprising a 64 bit block (each 32 bit data word is actually stored as 39 bits, since with ECC each word also includes 7 syndrome bits, so that a 78 bit block is formed in total).
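The two-bank interleaving scheme described above can be sketched as follows. This is an illustrative model only; the function names are hypothetical, and the 7 ECC syndrome bits per word are omitted for clarity.

```python
# Illustrative sketch of two-way interleaving: a 64 bit block is split into
# two 32 bit double words stored at the same address in two banks; a read
# accesses both banks and multiplexes the halves back into one block.
# (The 7 ECC syndrome bits per 32 bit word are omitted here.)

banks = [dict(), dict()]   # two 32 bit wide memory banks

def write_block(addr, block64):
    banks[0][addr] = block64 & 0xFFFFFFFF          # low double word
    banks[1][addr] = (block64 >> 32) & 0xFFFFFFFF  # high double word

def read_block(addr):
    return banks[0][addr] | (banks[1][addr] << 32)

write_block(0x40, 0x0123456789ABCDEF)
assert read_block(0x40) == 0x0123456789ABCDEF
```

Because both banks share the same address for a given block, a single strobe can initiate the access in both banks at once; the cost, as discussed below, is that the two 32 bit halves must then be moved out serially.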
An interleaved memory system distributes the components of a data block among a series of memory banks. To process the data block, therefore, the distributed components of the data block must be reassembled. Thus, while a particular address in a series of memory banks may be simultaneously read by asserting a single CAS signal, if the series of memory banks are interleaved with each other, the outputs of the memory banks must be processed serially, one after the other. As a result, a next address cannot be accessed until the previous read operation is completed and the rate at which data may be retrieved from the memory unit is significantly reduced.
The use of latching techniques can avoid such a result. A latch is a logic device whose output will follow the input upon the receipt of a separate clock pulse. If the output of each memory bank is input into a latch, a next memory access may be commenced while the latches continue to transfer data from the first memory access. However, the use of latching techniques generally introduces additional time delays into a data read, most notably in the time required to read the first data component, because of the time delay inherent in transferring the latch input to its output.