The operating frequency of dynamic random-access memory (DRAM) has increased with each generation, and the amount of data accessed simultaneously during memory access operations for a READ command or a WRITE command has increased correspondingly. In read operations, to achieve an "n"-times data rate, read data is typically accessed as "n" bits from the DRAM arrays (e.g., prefetched) into a first-in, first-out (FIFO) multiplexer (mux), where it undergoes a parallel-to-serial conversion within one column cycle. The number of bits provided by the memory cell array in this manner is referred to as the prefetch size. Thus, in this example, the prefetch size is "n."
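The prefetch-and-serialize flow described above can be illustrated with a minimal sketch. This is not any vendor's implementation; the array contents, the helper names `prefetch` and `serialize`, and the 8-bit prefetch size are all hypothetical, chosen only to show "n" bits fetched in parallel in one column cycle and then emitted one per output beat.

```python
def prefetch(array, col, n):
    """Fetch n bits in parallel from the array, starting at column col."""
    return array[col:col + n]

def serialize(bits):
    """Parallel-to-serial conversion: emit one bit per output clock beat."""
    for bit in bits:
        yield bit

array = [1, 0, 1, 1, 0, 0, 1, 0]          # hypothetical 8-bit row segment
parallel_word = prefetch(array, 0, 8)      # prefetch size n = 8
serial_stream = list(serialize(parallel_word))
print(serial_stream)  # the same 8 bits, now delivered one per beat
```

The point of the sketch is only that the data rate at the pin can be "n" times the column-cycle rate because all "n" bits were already fetched in a single array access.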
In conventional devices, one option for realizing a 16-times data rate is to use a prefetch size of 16n. This, however, corresponds to a burst length of 16 data words, which is incompatible with a typical cache line size of 64 bytes on a conventional 64-bit data bus, because 16 transfers of 64 bits each move 128 bytes per access. Alternatively, to realize the same data rate as a prefetch size of 16n while retaining a conventional circuit structure for a prefetch size of 8n, the period of the column cycle must be halved (e.g., the core speed must be doubled), which may present challenges with circuit complexity and timing.
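The mismatch above follows from simple arithmetic, sketched here using the figures stated in the text (64-bit bus, 64-byte cache line); the helper name `bytes_per_burst` is illustrative, not from the source.

```python
# Back-of-the-envelope check of burst size against a cache line.
BUS_WIDTH_BITS = 64     # conventional data bus width from the text
CACHE_LINE_BYTES = 64   # typical cache line size from the text

def bytes_per_burst(burst_length):
    """Bytes transferred by one burst on the data bus."""
    return burst_length * BUS_WIDTH_BITS // 8

print(bytes_per_burst(8))   # 8n prefetch: 64 bytes, exactly one cache line
print(bytes_per_burst(16))  # 16n prefetch: 128 bytes, twice a cache line
```

A burst length of 8 fills one 64-byte cache line exactly, which is why the 8n structure fits the conventional bus, while a burst length of 16 overshoots it.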