Random access memory (RAM) is a component used within electronic systems to store data for use by other components within the system. Dynamic RAM (DRAM) is a type of RAM which uses a capacitor-type storage and requires periodic refreshing in order to maintain the data stored within the DRAM. Static RAM (SRAM) is another type of RAM which retains the information stored within the SRAM as long as power is applied. SRAM does not require periodic refreshing in order to maintain the stored data.
RAM is generally organized within the system into addressable blocks, each containing a predetermined number of memory cells. Each memory cell within a RAM represents a bit of information. The memory cells are organized into rows and columns. Each row of memory cells forms a word. Each memory cell within a row is coupled to the same wordline which is used to activate the memory cells within the row. The memory cells within each column of a block of memory are also each coupled to a pair of bitlines. These bitlines are also coupled to local input/output (LIO) lines. These local input/output lines are used to read data from an activated memory array or write data to an activated memory array. The pair of bitlines includes a bitline and an inverse bitline. A memory cell is therefore accessed by activating the appropriate wordline and pair of bitlines.
In a typical memory subsystem utilizing synchronous SRAM, data is transferred to and from each SRAM via one or more data buses. A data bus is a set of one or more data transmission lines or pins (depending on the context), and may be uni-directional or bi-directional. If uni-directional data buses are used to transfer data in a memory subsystem, then each SRAM must have at least two distinct data buses, one to receive input data during write operations and one to send output data during read operations. If bi-directional data buses are used to transfer data in a memory subsystem, then each SRAM needs only one distinct data bus which will both receive input data during write operations and send output data during read operations.
As used herein, the term data bus configuration refers to the number of distinct buses and the type of data buses used by the memory, to transfer data during read and write operations. The type characteristic of the data bus refers to whether the data bus is uni-directional or bi-directional. The two data bus configurations commonly implemented in synchronous SRAMs are common I/O and separate I/O. The common I/O data bus configuration includes one bi-directional data bus used to transfer data for both read and write operations. The separate I/O data bus configuration includes two uni-directional data buses, one used to transfer data for read operations and one used to transfer data for write operations.
As discussed herein, the read protocol of an SRAM refers to the timing of the first piece of output data driven from the SRAM during a read operation. The two read protocols commonly implemented in synchronous SRAMs are register-flow through (R-FT) and register-register (R-R). Using the register-flow through protocol, the first output data is driven from the same rising edge of the input clock that latches the address and control information. The register-flow through protocol is also sometimes referred to simply as flow through protocol. Using the register-register protocol, the first output data is driven from the rising edge of the input clock, one cycle after the address and control information are latched. The register-register protocol is also referred to as a pipelined protocol.
As discussed herein the write protocol of an SRAM refers to the timing of the first piece of input data driven to the SRAM during a write operation. The three write protocols commonly implemented in synchronous SRAMs are early write (EW), late write (LW) and double late write (DLW). Using the early write protocol, the first input data is latched on the same rising edge of the input clock that latches the address and control information. Using the late write protocol, the first input data is latched on the rising edge of the input clock, one cycle after the address and control information are latched. Using the double late write protocol, the first input data is latched on the rising edge of the input clock, two cycles after the address and control information are latched.
As discussed herein, the burst protocol of an SRAM refers to how much data is transferred to and from the SRAM per read and write operation, and how many clock cycles it takes to transfer that data. The four burst protocols commonly implemented in synchronous SRAMs are single data rate burst of one (SDR-B1), single data rate burst of two (SDR-B2), double data rate burst of two (DDR-B2) and double data rate burst of four (DDR-B4). Using the single data rate burst of one protocol, one piece of data is transferred per read and write operation, in one clock cycle. Using this protocol, data is latched by the SRAM, during write operations, and driven from the SRAM, during read operations, only on the rising edge of the input clock. Using the single data rate burst of two protocol, two pieces of data are transferred per read and write operation, in two clock cycles. Using this protocol, data is latched by the SRAM, during write operations, and driven from the SRAM, during read operations, only on the rising edge of the input clock. Using the double data rate burst of two protocol, two pieces of data are transferred per read and write operation, in one clock cycle. Using this protocol, data is latched by the SRAM, during write operations, and driven from the SRAM, during read operations, on both the rising and falling edges of the input clock. Using the double data rate burst of four protocol, four pieces of data are transferred per read and write operation, in two clock cycles. Using this protocol, data is latched by the SRAM, during write operations, and driven from the SRAM, during read operations, on both the rising and falling edges of the input clock.
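The four burst protocols above can be summarized as a small table of (pieces of data, clock cycles) per operation. The following is a minimal sketch (the dictionary name and tuple layout are illustrative, not part of any standard) showing that the double data rate protocols move twice as much data per clock cycle as the single data rate protocols:

```python
# Pieces of data transferred per operation and clock cycles consumed,
# for each of the four common synchronous SRAM burst protocols.
burst_protocols = {
    "SDR-B1": (1, 1),  # single data rate, burst of one: rising edge only
    "SDR-B2": (2, 2),  # single data rate, burst of two: rising edge only
    "DDR-B2": (2, 1),  # double data rate, burst of two: both clock edges
    "DDR-B4": (4, 2),  # double data rate, burst of four: both clock edges
}

# DDR protocols transfer two pieces of data per clock cycle; SDR transfers one.
for name, (pieces, cycles) in burst_protocols.items():
    rate = pieces / cycles
    assert rate == (2.0 if name.startswith("DDR") else 1.0)
```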
As discussed herein, the operation protocol of an SRAM refers to the combination of read, write and burst protocols implemented in the SRAM. It should be noted that the register-flow through protocol is not typically implemented in high speed synchronous SRAMs because the flow-through access time of an SRAM severely limits the cycle time of the SRAM.
Any combination of read, write and burst protocols can be used with a common I/O data bus configuration. The four operation protocols commonly implemented in high-speed common I/O synchronous SRAMs are: 1) register-register, early write, single data rate burst of one; 2) register-register, late write, single data rate burst of one; 3) register-register, double late write, single data rate burst of one; and 4) register-register, late write, double data rate burst of two.
Any combination of read, write and burst protocols can be used with a separate I/O data bus configuration. The six operation protocols commonly implemented in high speed separate I/O synchronous SRAMs are: 1) register-register, early write, single data rate burst of one; 2) register-register, late write, single data rate burst of one; 3) register-register, late write, double data rate burst of two; 4) register-register, early write, single data rate burst of two; 5) register-register, late write, single data rate burst of two; and 6) register-register, late write, double data rate burst of four.
As discussed herein, an operation sequence refers to any sequence of read, write and deselect operations in which no superfluous deselect operations are included. That is, deselect operations are only included in the sequence if they are required by the SRAM due to its data bus configuration and/or operation protocol.
As discussed herein the data transfer efficiency (DTE) of an SRAM refers to how efficiently data is transferred to and from the SRAM during a given operation sequence. The data transfer efficiency value corresponds directly to data bus utilization percentage. If data is transferred on all of the SRAM's data buses every clock cycle during a given operation sequence, then the data transfer efficiency value of the SRAM for that particular sequence is equal to 100%. If there are dead cycles on the data buses, meaning that data is not transferred on all of the SRAM's data buses every clock cycle during a given operation sequence, then the data transfer efficiency value of the SRAM for that particular sequence is less than 100%. The more dead cycles on the data buses during a given operation sequence, the lower the data transfer efficiency value of the SRAM for the particular sequence.
For a given operation sequence, the data transfer efficiency of a synchronous SRAM can be calculated using the following equation:

DTE = (N2 * (R + W)) / (N1 * (R + W + D))  (1)

In the above equation, the value R represents the number of read operations in the sequence, the value W represents the number of write operations in the sequence, the value D represents the number of deselect operations in the sequence, the value N1 represents the number of data buses implemented in the SRAM and the value N2 represents the number of clock cycles needed to transfer all of the data associated with each read and write operation. For a synchronous SRAM implementing the common I/O protocol, the values N1 and N2 are both equal to one, allowing the SRAM to have a maximum data transfer efficiency of 100%. For a synchronous SRAM implementing the separate I/O protocol with the single data rate burst of two or double data rate burst of four protocols, the values N1 and N2 are both equal to two, allowing the SRAM to have a maximum data transfer efficiency of 100%. For a synchronous SRAM implementing the separate I/O protocol with the single data rate burst of one or double data rate burst of two protocols, the value N1 is equal to two and the value N2 is equal to one, limiting the SRAM to a maximum data transfer efficiency of 50%.
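Equation (1) can be expressed directly as a function. The following is a minimal Python sketch (the function and parameter names are illustrative) that also checks the maximum-efficiency claims made above for each configuration:

```python
def dte(reads, writes, deselects, n_buses, cycles_per_op):
    """Data transfer efficiency per equation (1):
    DTE = (N2 * (R + W)) / (N1 * (R + W + D))."""
    return (cycles_per_op * (reads + writes)) / (
        n_buses * (reads + writes + deselects)
    )

# Common I/O (N1 = N2 = 1): with no deselects, the maximum DTE is 100%.
assert dte(3, 4, 0, 1, 1) == 1.0
# Separate I/O with SDR-B2 or DDR-B4 (N1 = N2 = 2): maximum DTE is 100%.
assert dte(3, 4, 0, 2, 2) == 1.0
# Separate I/O with SDR-B1 or DDR-B2 (N1 = 2, N2 = 1): capped at 50%.
assert dte(3, 4, 0, 2, 1) == 0.5
```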
It is assumed herein that the operation frequency of the memory subsystem is very fast, such that, if a bi-directional data bus is utilized to transfer data to and from the SRAM, it cannot change from transferring input data to the SRAM to transferring output data from the SRAM, or vice versa, reliably in the same clock cycle. The operation frequency of the memory subsystem is equal to the frequency of the input clock to the SRAM. It is further assumed herein that at least one dead cycle must be inserted on a bi-directional data bus during transitions between read and write operations such that no data is transferred on the data bus during the clock cycle in which the data bus changes state from output to input or input to output. No such restriction is assumed for a uni-directional data bus, since a uni-directional data bus, by definition, never changes state from input to output or output to input.
A typical interface between an SRAM controller and a common I/O synchronous SRAM is illustrated in FIG. 1. The SRAM controller 10 is coupled to the SRAM 20 to provide a clock input on the clock input signal line 12, to provide address inputs on multiple address input signal lines 14 and to provide operation control inputs on multiple control input signal lines 16. The SRAM controller 10 is also coupled to the SRAM 20 by multiple data input/output signal lines 18 forming the bi-directional data bus. The control input signal lines 16 include two signal lines to specify a current operation, including read, write and deselect operations.
As discussed above, because common I/O synchronous SRAMs use bi-directional data buses to transfer data, they require at least one dead cycle to be inserted on their data buses during transitions between read and write operations and during transitions between write and read operations. Collectively, the transitions between read, write and deselect operations are referred to herein as operation transitions. A dead cycle is typically inserted on a bi-directional data bus synchronously, using a deselect operation. The number of deselect operations needed to insert one dead cycle on the data bus between operation transitions varies depending on the operation protocol of the SRAM.
The data transfer efficiency of common I/O synchronous SRAMs varies depending on the operation sequence. The more operation transitions in a given operation sequence, the more deselect operations needed in the sequence. As a consequence of having more deselect operations in the sequence, the data transfer efficiency of the SRAM is lower for that particular sequence.
The deselect requirements and data transfer efficiency for all four common I/O operation protocols are described below in reference to the exemplary operation sequence: READ, WRITE, READ, READ, WRITE, WRITE, WRITE. Timing diagrams of this sequence for each common I/O operation protocol are illustrated in FIGS. 2–5 and will be discussed below. In these examples, the data transfer efficiency is calculated based on a repeating sequence of these seven operations, and is determined by the number of deselect operations that must be inserted in the sequence. Each of the timing diagrams included in the examples of FIGS. 2–5 includes the same clock signal and the same address and data information. The only difference between the examples of FIGS. 2–5 is the number of deselect operations required between operation transitions, which changes the timing of the occurrence of the address and data information. When calculating the data transfer efficiency of the different protocols for this exemplary operation sequence, the only number that varies between the different operation protocols is the number of deselect operations within the sequence. As described above, for common I/O protocols, the number of data buses implemented (N1) and the number of clock cycles needed to transfer all of the data associated with each read and write operation (N2) are both equal to one. In this exemplary sequence the number of read operations (R) is equal to three and the number of write operations (W) is equal to four.
A timing diagram of the register-register, early write, single data rate burst of one, common I/O operation protocol, applied to the exemplary operation sequence, is illustrated in FIG. 2. Using this operation protocol, three deselect operations are required between each read to write operation transition to insert a dead cycle on the data bus. Using this operation protocol, it is not necessary to include any deselect operations between write to read operation transitions. The number of deselect operations (D) necessary in the exemplary sequence is therefore equal to six, which is three for each read to write operation transition. Accordingly, the data transfer efficiency for this protocol is equal to 53.8%, for this example, as calculated in equation (2) below:

DTE = (1 * 7) / (1 * 13) = 53.8%  (2)
A timing diagram of the register-register, late write, single data rate burst of one, common I/O operation protocol, applied to the exemplary operation sequence, is illustrated in FIG. 3. Using this operation protocol, two deselect operations are required between each read to write operation transition to insert a dead cycle on the data bus. Using this operation protocol, it is not necessary to include any deselect operations between write to read operation transitions. The number of deselect operations (D) necessary in the exemplary sequence is therefore equal to four, which is two for each read to write operation transition. Accordingly, the data transfer efficiency for this protocol is equal to 63.6%, for this example, as calculated in equation (3) below:

DTE = (1 * 7) / (1 * 11) = 63.6%  (3)
A timing diagram of the register-register, double late write, single data rate burst of one, common I/O operation protocol, applied to the exemplary operation sequence, is illustrated in FIG. 4. Using this operation protocol, one deselect operation is required between each read to write operation transition and between each write to read operation transition to insert a dead cycle on the data bus. The number of deselect operations (D) necessary in the exemplary sequence is therefore equal to four, which is one for each operation transition. Accordingly, the data transfer efficiency for this protocol is equal to 63.6%, for this example, as calculated in equation (4) below:

DTE = (1 * 7) / (1 * 11) = 63.6%  (4)
A timing diagram of the register-register, late write, double data rate burst of two, common I/O operation protocol, applied to the exemplary sequence, is illustrated in FIG. 5. Using this operation protocol, two deselect operations are required between each read to write operation transition to insert a dead cycle on the data bus. Using this operation protocol, it is not necessary to include any deselect operations between write to read operation transitions. The number of deselect operations (D) necessary in the exemplary sequence is therefore equal to four, which is two for each read to write operation transition. Accordingly, the data transfer efficiency for this protocol is equal to 63.6%, for this example, as calculated in equation (5) below:

DTE = (1 * 7) / (1 * 11) = 63.6%  (5)
As illustrated and discussed above, the data transfer efficiency of the common I/O operation protocols, illustrated in FIGS. 3–5, at 63.6%, is greater than the data transfer efficiency of the common I/O operation protocol, illustrated in FIG. 2, at 53.8%. The common I/O operation protocols, illustrated in FIGS. 3–5, can be considered the most efficient because they achieve the maximum data transfer efficiency possible for any common I/O synchronous SRAM for the exemplary operation sequence.
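The four common I/O examples above differ only in the deselect count D, so their efficiencies can be tabulated directly from equation (1) with N1 = N2 = 1. The following is a short sketch (dictionary keys are illustrative labels, not standard part names):

```python
# Deselect counts (D) per common I/O protocol for the repeating exemplary
# sequence READ, WRITE, READ, READ, WRITE, WRITE, WRITE (R = 3, W = 4).
# For common I/O, N1 = N2 = 1, so DTE = (R + W) / (R + W + D).
deselects = {
    "R-R, early write, SDR-B1 (FIG. 2)": 6,       # three per read-to-write
    "R-R, late write, SDR-B1 (FIG. 3)": 4,        # two per read-to-write
    "R-R, double late write, SDR-B1 (FIG. 4)": 4, # one per operation transition
    "R-R, late write, DDR-B2 (FIG. 5)": 4,        # two per read-to-write
}
efficiency = {name: 7 / (7 + d) for name, d in deselects.items()}

assert round(efficiency["R-R, early write, SDR-B1 (FIG. 2)"], 3) == 0.538
assert round(efficiency["R-R, late write, SDR-B1 (FIG. 3)"], 3) == 0.636
```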
As discussed above, the data transfer efficiency of common I/O synchronous SRAMs varies depending on the particular operation sequence. For the most efficient common I/O synchronous SRAMs, the data transfer efficiency is a maximum of 100% during operation sequences which consist of either all read operations or all write operations, with no operation transitions between read operations and write operations. Also, for the most efficient common I/O synchronous SRAMs, the data transfer efficiency is a minimum of 50%, when the sequence alternates between read and write operations every cycle.
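These two extremes can be checked with a small model. The sketch below (the function name is illustrative) assumes one deselect per read/write direction change in the repeating sequence, as in the double late write protocol of FIG. 4:

```python
def common_io_dte(sequence):
    """DTE of a most efficient common I/O SRAM (N1 = N2 = 1), assuming one
    deselect per read/write direction change in the repeating sequence.
    'R' marks a read operation, 'W' a write operation."""
    n = len(sequence)
    # Count direction changes, including the wrap-around of the repeat.
    d = sum(1 for i in range(n) if sequence[i] != sequence[(i + 1) % n])
    return n / (n + d)

assert common_io_dte(["R"] * 8) == 1.0       # all reads: maximum, 100%
assert common_io_dte(["R", "W"] * 4) == 0.5  # alternating: minimum, 50%
```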
The data transfer efficiency of most efficient common I/O synchronous SRAMs for various operation sequences is illustrated in the graphs of FIG. 6. Within FIG. 6, the horizontal axis corresponds to the average number of consecutive read operations in a given operation sequence and the various plots correspond to the average number of consecutive write operations in a given operation sequence. Each point on the various plots within FIG. 6 corresponds to the data transfer efficiency of that particular combination of consecutive read operations and consecutive write operations.
A typical interface between an SRAM controller and a separate I/O synchronous SRAM is illustrated in FIG. 7. The SRAM controller 28 is coupled to the SRAM 40 to provide a clock input on the clock input signal line 30, to provide address inputs on multiple address input signal lines 32 and to provide operation control inputs on multiple control input signal lines 34. The SRAM controller 28 is also coupled to the SRAM 40 by multiple data input signal lines 36 and multiple data output signal lines 38, forming two uni-directional data buses. The control input signal lines 34 include two signal lines to specify a current operation, including read, write and deselect operations.
Separate I/O synchronous SRAMs can be divided into two categories. The first category, category 1, includes separate I/O synchronous SRAMs that transfer data in two clock cycles, including SRAMs that operate according to the single data rate burst of two and double data rate burst of four protocols. The second category, category 2, includes separate I/O synchronous SRAMs that transfer data in one clock cycle, including SRAMs that operate according to the single data rate burst of one and double data rate burst of two protocols.
Category 1 separate I/O synchronous SRAMs use uni-directional data buses to transfer data, thereby eliminating the need to insert any dead cycles on their data buses between operation transitions. Therefore, category 1 separate I/O synchronous SRAMs do not require any deselect operations to be inserted between operation transitions. However, category 1 separate I/O synchronous SRAMs do require deselect operations to be inserted in certain operation sequences. Specifically, between each read to read operation transition and each write to write operation transition, one deselect operation must be inserted because each read and write operation initiated to a category 1 separate I/O synchronous SRAM occupies its associated data bus for two clock cycles. Consequently, the data transfer efficiency of category 1 separate I/O synchronous SRAMs varies depending on the particular operation sequence. The more read to read and write to write operation transitions within the sequence, the more deselect operations that are necessary. The more deselect operations necessary in the sequence, the lower the data transfer efficiency will be.
The deselect requirements and data transfer efficiency for all three category 1 separate I/O operation protocols are described below in reference to the same exemplary operation sequence discussed above: READ, WRITE, READ, READ, WRITE, WRITE, WRITE. Timing diagrams of this exemplary operation sequence for the category 1 separate I/O operation protocols are illustrated in FIGS. 8–10 and will be discussed below. In these examples, the data transfer efficiency is calculated based on a repeating sequence of these seven operations, and is determined by the number of deselect operations that must be inserted in the sequence. Each of the timing diagrams included in the examples of FIGS. 8–10 include the same clock signal and the same address and data information. The only difference between the examples of FIGS. 8–10, is the number of deselect operations required between operation transitions, which changes the timing of the occurrence of the address and data information. When calculating the data transfer efficiency of the different protocols for this exemplary operation sequence, the only number that varies between the different operation protocols is the number of deselect operations within the sequence. As described above, for category 1 separate I/O protocols, the number of data buses implemented (N1) and the number of clock cycles needed to transfer all of the data associated with each read and write operation (N2) are both equal to two. In this exemplary sequence the number of read operations (R) is equal to three and the number of write operations (W) is equal to four.
A timing diagram of the register-register, early write, single data rate burst of two, category 1 separate I/O operation protocol, applied to the exemplary operation sequence, is illustrated in FIG. 8. Using this operation protocol, one deselect operation is required between each read to read and write to write operation transition to insert a dead cycle on the data bus. The number of deselect operations (D) necessary in the exemplary sequence is therefore equal to three, which is one for each read to read and write to write transition. Accordingly, the data transfer efficiency for this protocol is equal to 70%, for this example, as calculated in equation (6) below:

DTE = (2 * 7) / (2 * 10) = 70%  (6)
A timing diagram of the register-register, late write, single data rate burst of two, category 1 separate I/O operation protocol, applied to the exemplary operation sequence, is illustrated in FIG. 9. Using this operation protocol, one deselect operation is required between each read to read and write to write operation transition to insert a dead cycle on the data bus. The number of deselect operations (D) necessary in the exemplary sequence is therefore equal to three, which is one for each read to read and write to write transition. Accordingly, the data transfer efficiency for this protocol is equal to 70%, for this example, as calculated in equation (7) below:

DTE = (2 * 7) / (2 * 10) = 70%  (7)
A timing diagram of the register-register, late write, double data rate burst of four, category 1 separate I/O operation protocol, applied to the exemplary operation sequence, is illustrated in FIG. 10. Using this operation protocol, one deselect operation is required between each read to read and write to write operation transition to insert a dead cycle on the data bus. The number of deselect operations (D) necessary in the exemplary sequence is therefore equal to three, which is one for each read to read and write to write transition. Accordingly, the data transfer efficiency for this protocol is equal to 70%, for this example, as calculated in equation (8) below:

DTE = (2 * 7) / (2 * 10) = 70%  (8)
It should be noted that the data transfer efficiency of all three of the category 1 separate I/O operation protocols is the same, and at 70% is greater than the data transfer efficiency of the most efficient common I/O operation protocols for this particular exemplary operation sequence.
As discussed above, the data transfer efficiency of category 1 separate I/O synchronous SRAMs varies depending on the operation sequence. The data transfer efficiency is a maximum of 100% when the number of read to read and write to write transitions in a given operation sequence is equal to zero. Accordingly, to maximize the data transfer efficiency for category 1 separate I/O operation protocols, the sequence must alternate between read and write operations every cycle. The data transfer efficiency for category 1 separate I/O operation protocols is a minimum of 50% when the sequence is either all read operations or all write operations, thereby maximizing the number of read to read or write to write transitions in the sequence.
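These extremes mirror those of the common I/O case, but inverted. The sketch below (function name illustrative) applies equation (1) with N1 = N2 = 2 and the category 1 deselect rule to confirm both bounds:

```python
def cat1_dte(sequence):
    """DTE of a category 1 separate I/O SRAM for a repeating sequence:
    N1 = N2 = 2, one deselect per read-to-read or write-to-write transition.
    'R' marks a read operation, 'W' a write operation."""
    n = len(sequence)
    d = sum(1 for i in range(n) if sequence[i] == sequence[(i + 1) % n])
    return (2 * n) / (2 * (n + d))

assert cat1_dte(["R", "W"] * 4) == 1.0  # alternating every cycle: maximum, 100%
assert cat1_dte(["R"] * 8) == 0.5       # all reads: minimum, 50%
```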
The data transfer efficiency of the category 1 separate I/O synchronous SRAMs for various operation sequences is illustrated in the graphs of FIG. 11. Within FIG. 11, the horizontal axis corresponds to the average number of consecutive read operations in a given operation sequence and the various plots correspond to the average number of consecutive write operations in a given operation sequence. Each point on the various plots within FIG. 11 corresponds to the data transfer efficiency of that particular combination of consecutive read operations and consecutive write operations.
Category 2 separate I/O synchronous SRAMs use uni-directional data buses to transfer data. Because each read and write operation initiated to a category 2 separate I/O synchronous SRAM occupies its associated data bus for only one clock cycle, the category 2 separate I/O synchronous SRAMs do not require deselect operations to be inserted between any sequence of read and write operations. Accordingly, the data transfer efficiency of all three category 2 separate I/O operation protocols is independent of the operation sequence.
The deselect requirements and data transfer efficiency for all three category 2 separate I/O operation protocols are described below in reference to the same exemplary operation sequence discussed above: READ, WRITE, READ, READ, WRITE, WRITE, WRITE. Timing diagrams of this sequence for each category 2 separate I/O operation protocol are illustrated in FIGS. 12–14 and will be discussed below. In these examples, the data transfer efficiency is calculated based on a repeating sequence of these seven operations. Each of the timing diagrams included in the examples of FIGS. 12–14 includes the same clock signal and the same address and data information. The only difference between the examples of FIGS. 12–14 is the particular operation protocol, which changes the timing of the occurrence of the address and data information. Because no deselect operations need to be added, each of the numbers within the data transfer efficiency calculation is the same for each of the category 2 separate I/O operation protocols. As described above, for category 2 separate I/O protocols, the number of data buses implemented (N1) is equal to two and the number of clock cycles needed to transfer all of the data associated with each read and write operation (N2) is equal to one. In this exemplary sequence the number of read operations (R) is equal to three and the number of write operations (W) is equal to four. Because no deselect operations are necessary for category 2 separate I/O protocols, the number of deselect operations (D) in the exemplary sequence is equal to zero.
A timing diagram of the register-register, early write, single data rate burst of one, category 2 separate I/O operation protocol, applied to the exemplary operation sequence, is illustrated in FIG. 12. Using this operation protocol, no deselect operations are required. Accordingly, the data transfer efficiency for this protocol is equal to 50%, for this example, as calculated in equation (9) below:

DTE = (1 * 7) / (2 * 7) = 50%  (9)
A timing diagram of the register-register, late write, single data rate burst of one, category 2 separate I/O operation protocol, applied to the exemplary operation sequence, is illustrated in FIG. 13. Using this operation protocol, no deselect operations are required. Accordingly, the data transfer efficiency for this protocol is equal to 50%, for this example, as calculated in equation (10) below:

DTE = (1 * 7) / (2 * 7) = 50%  (10)
A timing diagram of the register-register, late write, double data rate burst of two, category 2 separate I/O operation protocol, applied to the exemplary operation sequence, is illustrated in FIG. 14. Using this operation protocol, no deselect operations are required. Accordingly, the data transfer efficiency for this protocol is equal to 50%, for this example, as calculated in equation (11) below:

DTE = (1 * 7) / (2 * 7) = 50%  (11)
It should be noted that the data transfer efficiency for all three of the category 2 separate I/O operation protocols is equal to 50% and is less than the data transfer efficiency of all three of the most efficient common I/O operation protocols and all three category 1 separate I/O operation protocols for this particular exemplary operation sequence. As discussed above, the data transfer efficiency of category 2 separate I/O synchronous SRAMs is independent of the operation sequence. The data transfer efficiency of category 2 separate I/O synchronous SRAMs is always 50%.
Of the three categories of synchronous SRAM operation protocols that have been discussed, the data transfer efficiency of the category 2, separate I/O operation protocols is the lowest of the three. The data transfer efficiency of the category 2, separate I/O operation protocols is equal to 50%, regardless of the operation sequence. The data transfer efficiency of the other two categories of operation protocols, common I/O and category 1 separate I/O, are in the range of 50% to 100%, and depend on the particular operation sequence. The data transfer efficiency of the common I/O operation protocol is maximized for operation sequences with many read to read and write to write operation transitions and few read to write and write to read operation transitions. The data transfer efficiency of the category 1 separate I/O operation protocol is maximized for operation sequences with many read to write and write to read operation transitions and few read to read and write to write operation transitions. Currently, there is not a single system or protocol which maximizes the data transfer efficiency for operation sequences with many read to read and write to write transitions and many read to write and write to read operation transitions.