Signaling rate advances continue to outpace core access time improvements in dynamic random access memories (DRAMs), leading to device architectures that prefetch ever larger amounts of data from the core to meet peak data transfer rates. The trend in a number of data processing applications, however, is toward finer-grained memory access, so that prefetching large quantities of data in an effort to reach peak data transfer rates may result in retrieval of a substantial amount of unneeded data, wasting power and increasing thermal loading. Although DRAM architectures that output only a selected portion of prefetched data have been proposed, such architectures generally prefetch an amount of data that corresponds to the maximum prefetch size, completely filling a data buffer in the prefetch operation and then outputting only a portion of the buffered data. Consequently, a substantial quantity of non-requested data may be retrieved from the core and stored in the data buffer of such selectable-output-size devices, needlessly consuming power and increasing thermal loading.
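The inefficiency described above can be sketched quantitatively. The following is a hypothetical illustration, not drawn from the source: the function name, prefetch size, and request sizes are assumptions chosen only to show how the unneeded-data fraction (and thus wasted core power, to first order) grows as access granularity shrinks in a fixed-prefetch-size architecture.

```python
# Hypothetical model (all names and numbers are illustrative assumptions):
# a fixed-size prefetch architecture always retrieves PREFETCH_BITS from
# the core, even when the host requests fewer bits.

def wasted_fraction(prefetch_bits: int, requested_bits: int) -> float:
    """Fraction of prefetched bits that were never requested."""
    if not 0 < requested_bits <= prefetch_bits:
        raise ValueError("request must be positive and no larger than prefetch")
    return (prefetch_bits - requested_bits) / prefetch_bits

# Assume a device that always prefetches 256 bits per column access.
PREFETCH_BITS = 256

# Finer-grained accesses request ever smaller portions of the buffered data,
# so an ever larger fraction of core activity is spent on unneeded bits.
for req in (256, 128, 64, 32):
    print(f"requested {req:3d} bits -> wasted fraction {wasted_fraction(PREFETCH_BITS, req):.2f}")
```

Under these assumed numbers, a 32-bit request wastes 87.5% of the prefetched data, illustrating why fetching only the requested quantity from the core is attractive for fine-grained workloads.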
In addition, as the cost of producing successive generations of semiconductor devices escalates, it becomes increasingly desirable to extend the operating frequency range of the current device generation. Unfortunately, increasing the operating frequency range for data transfers stresses the ability of core access times to keep pace. Consequently, solutions that support a wide data transfer range without over-stressing core speed are highly desirable.