The present invention relates to late write architectures for memory devices, wherein write data is received by a memory device some time after a corresponding write address has been presented thereto, and, in particular, to those architectures wherein write data is stored in a buffer in the memory device and written to a memory array thereof at a later time.
So-called late write memory architectures define industry standard methodologies for allowing a read operation to be initiated prior to the completion of a prior write operation. This feature increases data throughput in a memory by reducing latency between write and read operations. FIG. 1a illustrates a conventional pipelined read operation. As shown, because of delays involved with reading data from the memory device, the read data (RD1) associated with a read address (RA1) is not available until some time after the read address has been presented to the memory device (e.g., two clock cycles after the read address for the illustrated example). Conversely, for the conventional write operation shown in FIG. 1b, the write data (WD1) associated with a write address (WA1) is available in the same clock cycle as the write address. This timing difference between read and write operations leads to latencies when read and write operations are executed back-to-back. For example, as shown in FIG. 1c, for a sequence of write-read-write operations, because of the latency associated with the read operation, the address (WA2) associated with the second write operation must be delayed two clock cycles from the read address (RA1) to allow the read operation to complete. Such latencies lead to overall slower operation of a system employing conventional memory devices.
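The bubble of FIG. 1c can be sketched as a simple scheduling rule. The following is an illustrative model only (the function name, operation encoding, and scheduling policy are assumptions for illustration, not part of the specification): a conventional write drives its data in the same cycle as its address, so a write following a read must be pushed out past the read's two-cycle data latency.

```python
READ_LATENCY = 2  # cycles from read address to read data (per FIG. 1a)

def conventional_schedule(ops):
    """Assign address-bus cycles to back-to-back operations.

    ops: list of ("R" or "W", address). A write that follows a read is
    delayed READ_LATENCY cycles past the read address, modeling FIG. 1c.
    """
    cycle, out, prev = 0, [], None
    for kind, addr in ops:
        if kind == "W" and prev is not None and prev[0] == "R":
            # Wait for the read's data to clear before issuing the write.
            cycle = max(cycle, prev[1] + READ_LATENCY)
        out.append((kind, addr, cycle))
        prev = (kind, cycle)
        cycle += 1
    return out

# Write-read-write: WA2 lands two cycles after RA1, as in FIG. 1c.
print(conventional_schedule([("W", "WA1"), ("R", "RA1"), ("W", "WA2")]))
```

For the write-read-write sequence of FIG. 1c, the model places WA1 at cycle 0, RA1 at cycle 1, and WA2 at cycle 3, reproducing the two-cycle delay described above.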
Various schemes have been introduced to avoid the latency problems experienced with conventional memory devices. For example, burst memory architectures seek to execute a number of read or write operations back-to-back and thereby avoid latencies. Also, pipelined operation allows addresses associated with future operations to be presented on a bus before the data associated with those operations. For example, late write architectures, as shown in FIGS. 2a and 2b, allow write operations to resemble read operations in that the write data (WDX) associated with a write address (WAX) is presented on a data bus some time after the write address is presented on an address bus. For the illustrated example, write data is available two clock cycles after a corresponding write address, just as read data (RDX) is available two clock cycles after a corresponding read address (RAX) is first presented. FIGS. 2a and 2b show that for any combination of reads and writes, no latencies are experienced when the operations are executed back-to-back. What is desired, therefore, is a memory architecture which can support such late write schemes.
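Under the late write timing of FIGS. 2a and 2b, reads and writes share one address-to-data latency, so any mix of operations issues back-to-back. A minimal sketch, assuming the two-cycle latency of the illustrated example (the function name and tuple layout are illustrative assumptions):

```python
LATENCY = 2  # cycles from address to data, same for reads and writes (FIGS. 2a/2b)

def late_write_schedule(ops):
    """Assign (address_cycle, data_cycle) to back-to-back operations.

    Because reads and writes have identical latency, every operation
    issues one cycle after the previous one with no bubble cycles.
    """
    return [(kind, addr, cycle, cycle + LATENCY)
            for cycle, (kind, addr) in enumerate(ops)]

# The same write-read-write sequence now needs no delay before WA2.
for kind, addr, a_cyc, d_cyc in late_write_schedule(
        [("W", "WA1"), ("R", "RA1"), ("W", "WA2")]):
    print(f"{kind} {addr}: address cycle {a_cyc}, data cycle {d_cyc}")
```

Here WA2 issues on cycle 2, immediately after RA1, with WD2 following on cycle 4; contrast the conventional case, where WA2 had to wait until cycle 3.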
In one embodiment, the present invention provides a memory device which includes an address pipeline configured to receive a write address at a first time and to provide the write address to a memory array at a second time corresponding to a time when write data associated with the write address is available to be written to the array. The address pipeline may include a series of registers arranged to receive the write address and to provide the write address to the memory array. In addition, the memory device may include a comparator as part of the address pipeline. The comparator is configured to compare the write address to another address (e.g., a read address) received at the memory device. The address pipeline may also include a bypass path to the array for read addresses received at the memory device.
The memory device may further include a data pipeline configured to receive data destined for the memory device and to provide the data to the memory array. The data pipeline may include a data bypass path which does not include the memory array.
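The interaction of the buffered write, the comparator, and the data bypass path described above can be illustrated with a small behavioral model. This is a simplified sketch under stated assumptions, not the patented circuit: the class name and methods are hypothetical, timing is abstracted away (write data is supplied at issue for simplicity), and a single pending write is buffered until the next write retires it to the array.

```python
class LateWriteMemory:
    """Behavioral sketch of a late-write memory with write-address
    comparison and a data bypass around the array (illustrative only)."""

    def __init__(self, size=16):
        self.array = [0] * size
        self.pending = None  # (addr, data) buffered until retired

    def issue_write(self, addr, data):
        """Present a new write; the previously buffered write, whose late
        data has by now arrived, is retired to the memory array."""
        self._retire()
        self.pending = (addr, data)

    def read(self, addr):
        """The comparator checks the read address against the pending
        write address; on a hit, the buffered data bypasses the array."""
        if self.pending is not None and self.pending[0] == addr:
            return self.pending[1]
        return self.array[addr]

    def _retire(self):
        if self.pending is not None:
            a, d = self.pending
            self.array[a] = d
            self.pending = None

mem = LateWriteMemory()
mem.issue_write(3, 5)
print(mem.read(3))   # hit on the pending write: forwarded via bypass
mem.issue_write(7, 9) # retires the write to address 3 into the array
print(mem.read(3))   # now served from the memory array
```

The comparator and bypass ensure a read that follows an uncommitted write still observes the newest data, which is what permits the write to be deferred in the first place.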
In another embodiment, the memory device address pipeline includes a first register configured to receive the write address and a pair of parallel address paths coupled between the first register and the memory array. One of the pair of parallel address paths may include at least a second register and, in one particular embodiment, includes a pair of registers. In alternative embodiments, the address pipeline of the memory device may include a pair of parallel address paths to the memory array, the pair of parallel address paths sharing at least one register.
In a further embodiment, the present invention provides a method which includes pipelining a write address within a memory device such that the write address is provided to a memory array at a time when write data corresponding to that address is to be written to the array. For one particular embodiment, that time may be two clock cycles after the write address is first received at the memory device. In general, pipelining the write address includes passing the write address through a series of register stages of the memory device. Such passing may be controlled according to instructions received at the memory device after the write address has been received. The write address may be passed through at least two register stages during the pipelining. This embodiment may further include comparing the write address to read addresses received at the memory device.
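The register-stage pipelining described above can be sketched as a simple shift chain. This is an illustrative model only (the two-stage depth matches the two-cycle example above; the function name and list representation are assumptions): each clock, the write address advances one stage, emerging at the array a fixed number of cycles after it was received.

```python
STAGES = 2  # register stages, matching the two-cycle example above

def clock(pipeline, new_addr=None):
    """Advance the register chain one cycle.

    pipeline: list of length STAGES holding the stage contents.
    Returns the address presented to the memory array this cycle,
    or None if no address emerges.
    """
    out = pipeline[-1]
    for i in range(STAGES - 1, 0, -1):
        pipeline[i] = pipeline[i - 1]
    pipeline[0] = new_addr
    return out

pipe = [None] * STAGES
print(clock(pipe, "WA1"))  # cycle 0: WA1 enters the first stage
print(clock(pipe))         # cycle 1: WA1 advances to the second stage
print(clock(pipe))         # cycle 2: WA1 is provided to the array
```

An address received on cycle 0 reaches the array on cycle 2, i.e., exactly when its late write data becomes available under the two-cycle scheme.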
These and other features and advantages of the present invention will be apparent from the detailed description and its accompanying drawings which follow.