When data is written to a dynamic random access memory (hereafter referred to as a “DRAM”), the row specified by the row address of the corresponding bank (called “row address A” in this specification) is first read into a row buffer, and the data is then written to the location specified by a column address. Once a row is held in the row buffer, different column addresses in the same row can be accessed without re-reading the row. On the other hand, when a different row address in the same bank (hereafter referred to as “row address B”) is to be accessed, the contents of the row buffer corresponding to row address A must first be written back, and the row designated by row address B must then be loaded into the row buffer. This write-back operation in the DRAM is called a “precharge”.
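The row-buffer behavior described above can be illustrated by the following sketch. This is a hypothetical model for explanation only; the class and method names (Bank, access) are illustrative and do not appear in the specification, and the step counts stand in for the relative cost of a row hit, a row miss, and a row conflict requiring a precharge.

```python
class Bank:
    """Illustrative model of one DRAM bank with a single row buffer."""

    def __init__(self):
        self.open_row = None  # row currently held in the row buffer, if any
        self.rows = {}        # memory array: row -> {column: data}

    def access(self, row, col, data=None):
        """Access (row, col); returns the number of steps the access takes."""
        steps = 0
        if self.open_row != row:
            if self.open_row is not None:
                steps += 1            # precharge: write the open row back
            self.rows.setdefault(row, {})
            self.open_row = row       # load the requested row into the buffer
            steps += 1
        if data is not None:
            self.rows[row][col] = data  # write through the row buffer
        steps += 1                    # column access within the open row
        return steps
```

Under this model, consecutive accesses to columns of an already-open row cost a single step, while switching to a different row in the same bank incurs the extra precharge and row-load steps, which is the asymmetry the scheduling discussed below tries to exploit.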
In the request queue of a memory controller, requests (memory access requests) are in principle issued in order of priority, starting from the request that has waited longest in the queue. To exploit the DRAM characteristics described above and perform accesses efficiently, a scheduling system is used in which a plurality of requests having the same row address are issued collectively. According to this algorithm, the memory controller determines which row address is to be stored next in the row buffer.
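A minimal sketch of such a scheduling policy follows, assuming a queue ordered oldest-first and a policy that prefers requests hitting the currently open row (a policy of this general shape is often known as open-row-first or FR-FCFS scheduling; that name and the function below are assumptions for illustration, not taken from the specification).

```python
def schedule(queue, open_row):
    """Pick the next request from `queue` (a list ordered oldest-first).

    Prefer the oldest request whose row matches the currently open row,
    so that requests sharing a row address are issued collectively;
    otherwise fall back to the oldest request overall.
    """
    for i, req in enumerate(queue):
        if req["row"] == open_row:
            return queue.pop(i)       # row hit: serve it ahead of older misses
    return queue.pop(0) if queue else None  # no hit: oldest request wins
```

Note that a real controller would also bound how long a row-miss request may be bypassed, to avoid starving the oldest request indefinitely.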
Systems have also been proposed that evict data from a cache based on a prediction of sequential addresses or on the access history, and that reduce the frequency of main storage accesses (Patent Documents 1 and 2).
Patent Document 1: Japanese Laid-open Patent Publication No. 9-2444957
Patent Document 2: Japanese Laid-open Patent Publication No. 6-103169
Patent Document 3: Japanese Laid-open Patent Publication No. 2005-518606
Patent Document 4: Japanese Laid-open Patent Publication No. 2005-174342