1. Field of the Invention
The invention relates to a method and apparatus for a memory device such as a Dynamic Random Access Memory (DRAM), and more particularly to a memory device including a common data bus for rapidly manipulating packets so that the packets can be stored and retrieved efficiently, preventing the system from incurring penalties such as bus turnaround penalties, small packet penalties, and back-to-back write penalties.
2. Description of the Related Art
As computer performance has increased in recent years, the demands on computer networks have significantly increased; faster computer processors and higher memory capabilities need networks with higher bandwidth capabilities to enable high speed transfer of significant amounts of data. Thus, today's main memory systems require exponentially increasing bandwidth, on the order of multiple gigabytes per second transfer rate, to keep up with rising processor frequencies and demanding user applications in the area of networking. Historically, these memory systems used commodity DRAMs in wide data paths to meet bandwidth requirements. To achieve even greater bandwidth, many methods have been proposed, such as reducing memory read/write turnaround time, row address strobe (RAS) and column address strobe (CAS) access time, and bank-to-bank conflicts.
In a conventional DRAM, the memory system may include a cell array which is organized into an array of memory banks. An individual memory cell may consist of a transistor paired with a tiny capacitor that is placed in either a charged (i.e., “1”) or discharged (i.e., “0”) state. Thereby, a single memory cell may be capable of being programmed to store one bit of information. The memory cells may be arranged in rows and columns. Address lines may be used to specify a particular memory cell for access. These address lines may be multiplexed to provide a bit address by using a row address strobe (RAS) signal and a column address strobe (CAS) signal. The RAS signal may be used to clock addresses to a row address register. A row address decoder decodes the address and specifies which rows are available for access. Similarly, the CAS signal may be used to clock addresses to the column address register. The column address decoder decodes the address and specifies which columns are available for access. Once a particular cell is specified by decoding its row and column, a read/write (R/W) signal is used to specify whether a bit is to be written into that cell, or the bit retained by that cell is to be read out of the memory bank.
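The multiplexed addressing described above can be sketched as follows. This is an illustrative model only, not taken from the patent: the 4096-by-4096 array geometry and the 12-bit row and column widths are assumptions chosen for the example.

```python
# Sketch of multiplexed DRAM addressing: a flat cell address is split into
# a row half (latched when RAS asserts) and a column half (latched when CAS
# asserts). The 12-bit widths (a 4096 x 4096 array) are assumed values.

ROW_BITS = 12   # 4096 rows    -> 12-bit row address
COL_BITS = 12   # 4096 columns -> 12-bit column address

def split_address(addr):
    """Return (row, col) for a flat cell address, mimicking the two
    strobe phases of a multiplexed address bus."""
    row = (addr >> COL_BITS) & ((1 << ROW_BITS) - 1)   # latched on RAS
    col = addr & ((1 << COL_BITS) - 1)                 # latched on CAS
    return row, col

# Example: a cell in row 5, column 7 of the assumed array.
row, col = split_address((5 << COL_BITS) | 7)
```

In hardware the two halves are driven over the same physical address pins at different times, which is why the RAS and CAS strobes are needed to tell the row and column registers when to latch.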
With the advent of the Rambus™ DRAM (RDRAM), strides have been made through the development of a customized high-bandwidth, low pin count memory interface. In an RDRAM system, the remapping of the addresses is achieved by swapping predetermined bits of the memory address. The swapping of bits has the effect of assigning neighboring rows in an array to different row latches.
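The bit-swapping remap described above can be sketched in a few lines. The particular bit positions exchanged here (bits 3 and 10) are assumptions for illustration, not the actual RDRAM mapping:

```python
# Hypothetical sketch of address-bit swapping: exchanging two chosen bit
# positions of the memory address so that neighboring row addresses map to
# different row latches. Bit positions 3 and 10 are assumed, not RDRAM's.

def swap_bits(addr, i, j):
    """Return addr with bits i and j exchanged."""
    bi = (addr >> i) & 1
    bj = (addr >> j) & 1
    if bi != bj:
        addr ^= (1 << i) | (1 << j)   # flip both bits to exchange them
    return addr

# Consecutive addresses are spread apart by the remap, so adjacent packets
# stored at sequential addresses tend not to collide on the same row latch.
remapped = [swap_bits(a, 3, 10) for a in range(16)]
```

Note that the swap is its own inverse, so the same function recovers the original address, which keeps the mapping trivially reversible in hardware.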
Although the RDRAM scheme has greatly reduced the rate of memory bank contentions, a significant drawback of the RDRAM scheme still exists in the manner in which the packets are retrieved from the memory device. When each incoming packet is received, a pointer from a linked list is assigned to each packet based upon a first-in first-out (FIFO) scheme. The pointer serves to point to the location where the packet is stored in the memory bank. The RDRAM protocol merely manipulates the storing of the packets by remapping the addresses at which the packets are stored in the memory banks. However, the RDRAM does not control the sequence in which the packets are retrieved from the memory banks. Thus, when a readout request is received, the packets may be transferred from the memory device and dequeued to the switch fabric based on many different read request schemes, such as a FIFO scheme, a priority request, a weighted round robin request, or an output queue congestion conflict scheme. If the read request is based upon any request scheme other than the FIFO scheme, it is very unlikely that all of the packets will be read out consistently according to the first-in first-out order in which the packets were stored in the memory banks. This means that after the packets have been accumulated in the packet memory and switched around for a while, the addresses of adjacent dequeued packets may no longer be located near each other. Consequently, when the packets are read out of the memory banks in a sequence not according to the FIFO scheme, the pointers of the dequeued packets are returned to the linked list in a non-sequential order. Namely, the pointers re-join the linked list in a random order. However, when a write request is received to assign a new incoming packet to the memory banks, the address pointers are then obtained according to availability from the linked list, and not according to the original sequential order.
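The pointer-recycling behavior described above can be sketched with a simple free-list model. The data structures here are assumptions for illustration, not the patent's implementation: pointers are handed out in order on writes, but a non-FIFO dequeue policy returns them in a scrambled order, so subsequent writes receive non-consecutive addresses.

```python
# Sketch (assumed structures) of a packet-buffer pointer free list.
# Arriving packets take pointers in order; a non-FIFO readout (e.g. a
# priority scheme) frees them in a different order, so later writes obtain
# pointers whose address values are no longer successive.

from collections import deque
import random

free_list = deque(range(8))          # pointers 0..7, initially sequential

# Enqueue: each arriving packet takes the next available pointer (FIFO).
in_use = [free_list.popleft() for _ in range(8)]

# Dequeue by some non-FIFO policy: pointers come back in a random order.
random.shuffle(in_use)
for ptr in in_use:
    free_list.append(ptr)

# New writes now draw pointers by availability, not original order, so the
# addresses need not be successive -- defeating a remap that assumes they are.
next_ptrs = [free_list.popleft() for _ in range(3)]
```

Once the free list is in this state, two consecutive writes can land on addresses that the bit-swap remap sends to the same bank, which is the stall condition the following paragraph describes.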
When the pointers are dequeued from the memory cells and freed back to the linked list in a random order, so that the address values are not successive, the address remapping scheme in RDRAM no longer works. Writes based on pointers drawn from the now-randomly ordered linked list may target the same memory device or the same bank, producing stall cycles for the network.
In sum, the RDRAM address swapping scheme may be helpful when adjacent addresses are stored and retrieved according to a FIFO scheme. However, the RDRAM remapping scheme suffers considerable drawbacks when non-FIFO reading schemes are used, because the freed pointers, and hence the address values, are no longer successive but effectively random.