Memory devices come in different types, usually classified according to the technology they use to store data. For example, there are Dynamic RAM (Random Access Memory), or DRAM, devices, including SDRAM (Synchronous DRAM) devices. Another increasingly common category of memory device is SRAM (Static RAM) devices, and yet another category is made up of flash memory devices.
When data is to be read out of such a device, a read command is sent from the requesting device or, more specifically, from a memory controller within the requesting device.
The requested data is typically broken up into blocks of a particular length, and a separate access request is sent for each block.
In addition to the time required to read data out of the external memory device, a certain amount of processing time is needed to handle each read access request. The total time required to complete a read operation is therefore longer than the time actually taken to read the data out of the memory device.
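The effect described above can be illustrated with a simple calculation. All figures here are hypothetical assumptions chosen only to show how the fixed per-request overhead accumulates when a read is split into block-sized requests; they do not describe any particular device.

```python
# Hypothetical timing figures (assumptions, not real device parameters).
BLOCK_SIZE = 64            # bytes per access request (assumed)
OVERHEAD_NS = 30           # processing time per read access request (assumed)
TRANSFER_NS_PER_BYTE = 1   # raw read-out time per byte (assumed)

def total_read_time(num_bytes: int) -> int:
    """Total time to read num_bytes when the read is split into blocks."""
    num_blocks = -(-num_bytes // BLOCK_SIZE)   # ceiling division
    transfer = num_bytes * TRANSFER_NS_PER_BYTE
    overhead = num_blocks * OVERHEAD_NS
    return transfer + overhead

# Reading 256 bytes takes four block requests, so the per-request
# processing overhead is paid four times on top of the raw transfer time.
print(total_read_time(256))  # 256 + 4*30 = 376 ns
```

Under these assumed figures, the read-out itself accounts for 256 ns, yet the operation takes 376 ns in total, which is the inefficiency that the prefetching technique described below attempts to mitigate.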
One attempt to mitigate this inefficiency, used at least in the case of SDRAM devices, is a technique known as ‘prefetching’: even before the memory controller receives a read access request from a bus master device, it begins reading data from the external memory device. Provided that the memory controller then receives a request for the data it has begun to read, it can process that request more quickly than it otherwise could. The disadvantage is that, if the next read access request received by the memory controller is for data other than the data the controller has already begun to read, the requested data may actually be retrieved more slowly than it would have been without prefetching.
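The behaviour described above, including the penalty on a wrong guess, can be sketched as follows. This is a minimal illustrative model, not a real controller: the sequential-prediction policy, the class name, and all cycle counts are assumptions made for the example.

```python
# Assumed costs, in cycles (illustrative only).
SLOW_READ = 10      # fetching a block from the external device
FAST_READ = 2       # returning a block the controller already began to read
ABORT_PENALTY = 3   # extra cost of abandoning a useless in-flight prefetch

class PrefetchingController:
    """Toy memory controller that speculatively reads the next block."""

    def __init__(self):
        self.prefetched_addr = None   # block address the controller began to read

    def read(self, addr: int) -> int:
        """Service a read at block address addr; return its cost in cycles."""
        if addr == self.prefetched_addr:
            cost = FAST_READ          # prediction was right: data is ready
        elif self.prefetched_addr is None:
            cost = SLOW_READ          # nothing prefetched: normal access
        else:
            # Prediction was wrong: the in-flight prefetch is useless and
            # the access ends up slower than with no prefetching at all.
            cost = SLOW_READ + ABORT_PENALTY
        # Speculatively begin reading the next sequential block, before
        # any request for it arrives.
        self.prefetched_addr = addr + 1
        return cost

ctrl = PrefetchingController()
print(ctrl.read(100))  # 10: nothing prefetched yet
print(ctrl.read(101))  # 2: sequential access hits the prefetched block
print(ctrl.read(500))  # 13: non-sequential access, slower than without prefetching
```

The three calls at the end trace both outcomes: a sequential request benefits from the speculative read, while a non-sequential one pays for it.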