1. FIELD OF THE INVENTION
The invention relates to the design of random access memory in electronic computers, and in particular, to random access memory supporting read-only or read/write access applications.
2. DESCRIPTION OF THE PRIOR ART
Designers of memory systems face a trade-off between maximizing bandwidth and minimizing pin count in the package. In general, high bandwidth can be achieved by increasing the bit-width of the memory at the expense of a larger pin count in the package. The resulting package is not only more expensive, but also requires more space on the circuit board.
From the technical viewpoint, especially in mainframe and supercomputer applications, huge memory systems are essential to attaining high performance. In order to achieve high bandwidth in these systems, a pipelined or interleaved architecture is typically deployed. As a result, these systems are implemented with a large number of chips, often in the tens of thousands. Space, degradation of reliability, and power consumption are significant concerns. Hence, a memory system achieving higher bandwidth without increasing pin/package count is extremely valuable.
FIG. 4 illustrates a conventional scheme. The input control signals are RAS (row address strobe), CAS (column address strobe), WE (write enable), and OE (output enable). The data signals are carried on a 4-bit bus designated I/O.sub.0-3. In this organization, the memory is represented by the memory array 32, which is a 256.times.256.times.4 array.
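The capacity and address-width arithmetic implied by this organization can be checked with a short sketch (the variable names below are illustrative only, not part of the disclosure):

```python
# Capacity and address-width arithmetic for a 256 x 256 x 4 array,
# as in the FIG. 4 organization (names are illustrative only).

rows, cols, width = 256, 256, 4
total_bits = rows * cols * width        # 262144 bits = 256 Kbit of storage
addr_bits = (rows - 1).bit_length()     # 8 bits suffice to select one of 256 rows

# The full 16-bit address is presented over the 8 pins A0-7 in two
# installments (row, then column), halving the address pin count.
full_addr_bits = addr_bits + (cols - 1).bit_length()

print(total_bits, addr_bits, full_addr_bits)  # -> 262144 8 16
```

This is why only 8 address pins (A.sub.0-7) appear in the figure even though 16 address bits are needed to select a location.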
During the read cycle, the address is presented to the address buffer 26 in two installments on the address lines A.sub.0-7. When the RAS signal goes from high to low, the first 8 bits, representing the row address, are selected by the multiplexer 27 and presented to the row address decoder 28. Within the same cycle, after a predetermined hold time, the 8-bit column address is presented to the address buffer 26. When the CAS signal goes low, the column address is latched into the column address decoder 30. After a suitable delay, the content of the addressed memory location is available on the data IO bus 29. The data IO bus 29 feeds into the data-out buffer 34. When the OE signal goes from high to low, it causes the OE clock generator 22 to issue a latch signal to the data-out buffer 34, latching in the content of the data IO bus 29. The data is then available on the I/O.sub.0-3 bus for external use.
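One way to visualize this sequence is as a simplified behavioral model of the falling-edge events; the class and method names below are hypothetical illustrations, not part of the disclosed circuit:

```python
# Hypothetical behavioral sketch of the multiplexed-address read cycle
# for a 256 x 256 x 4 memory as in FIG. 4 (names are illustrative only).

class DramModel:
    def __init__(self):
        # 256 rows x 256 columns of 4-bit words
        self.array = [[0] * 256 for _ in range(256)]
        self.row = None
        self.col = None

    def ras_fall(self, address):    # RAS high->low: latch the row address
        self.row = address & 0xFF

    def cas_fall(self, address):    # CAS high->low: latch the column address
        self.col = address & 0xFF

    def oe_fall(self):              # OE high->low: drive the word onto I/O0-3
        return self.array[self.row][self.col]

dram = DramModel()
dram.array[5][9] = 0b1010           # preload one 4-bit word
dram.ras_fall(5)                    # first installment: row address
dram.cas_fall(9)                    # second installment: column address
data = dram.oe_fall()               # data available at I/O0-3
print(data)  # -> 10
```

The model omits the hold times and decoder delays described in the text; it captures only the order in which the strobes latch the two address installments.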
During the write cycle, the address is similarly made available in two installments. The RAS signal, as before, latches in the row address, and the CAS signal latches in the column address. The data to be written is presented at the data-in buffer 33. When the WE signal goes low, the content of the data-in buffer 33 is latched onto the data IO bus 29 and, in turn, into the memory array 32 for storage.
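The write cycle can be sketched in the same hypothetical style (again, the names are illustrative, not part of the disclosed circuit):

```python
# Hypothetical behavioral sketch of the write cycle (names are
# illustrative only; the array is 256 x 256 x 4 as in FIG. 4).

class DramModel:
    def __init__(self):
        self.array = [[0] * 256 for _ in range(256)]
        self.row = None
        self.col = None

    def ras_fall(self, address):    # RAS falling edge: latch the row address
        self.row = address & 0xFF

    def cas_fall(self, address):    # CAS falling edge: latch the column address
        self.col = address & 0xFF

    def we_fall(self, data_in):     # WE low: latch the data-in buffer into the array
        self.array[self.row][self.col] = data_in & 0xF

dram = DramModel()
dram.ras_fall(17)                   # first installment: row address
dram.cas_fall(42)                   # second installment: column address
dram.we_fall(0b0110)                # data presented at the data-in buffer
print(dram.array[17][42])  # -> 6
```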
The conventional scheme shown in FIG. 4 assumes the use of dynamic random access memory (DRAM) components, so that an internal refresh clock 23 and an internal refresh address counter 24 are necessary to maintain the memory content. Refresh clock 23 and internal refresh address counter 24 are not necessary in organizations employing static random access memory (SRAM). However, the signaling scheme in both DRAMs and SRAMs, as well as in other analogous technologies, is substantially the same.
In the conventional scheme, the bandwidth of data input/output per cycle is the same as the bit-width of the external I/O bus. As noted above, because pin count is directly related to production and packaging cost, the limitation of bandwidth by the width of the external bus weighs heavily on the cost of production. Relaxing this limitation, i.e. increasing bandwidth without increasing the pin count, effectively increases production and packaging efficiency. However, such improvement must not come at the expense of space or power consumption, because circuit density, to which these factors relate, is also a very important cost factor, as well as a limitation upon actual applications (e.g. chips for portable devices are required to be both small in size and low in power consumption, since the device must be lightweight and must operate with low-output power supplies).
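The per-cycle bandwidth limitation can be put in concrete terms with assumed figures (the 100 ns cycle time below is an assumption for illustration, not a figure from the disclosure):

```python
# In the conventional scheme, per-cycle bandwidth equals the external
# bus width; the cycle time below is assumed for illustration only.

bus_width_bits = 4                          # external I/O0-3 bus
cycle_time_ns = 100                         # assumed access cycle time
cycles_per_second = 1e9 / cycle_time_ns
bandwidth_bits_per_second = bus_width_bits * cycles_per_second

print(bandwidth_bits_per_second)  # -> 40000000.0 (40 Mbit/s)
```

Under these assumptions, doubling the bandwidth would require doubling either the bus width (and hence the pin count) or the cycle rate; the motivation above is to escape this coupling.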
Also, any improvement over the conventional scheme must not be restricted to a particular technology, or require implementation only with certain organizations or paging schemes. Memory design is seldom an end in itself; therefore, a successful memory chip must be amenable to use in any memory organization, and be compatible with signals from a wide variety of CPUs or peripheral devices.