The operating frequency of processors tends to increase year by year, but the operating speed of the system memory used as the main memory does not increase commensurately. Therefore, a cache memory is usually used to bridge the speed gap between the processor and the system memory. Further, the cache memory is typically organized in a hierarchical form.
Recently, a need has arisen to reduce the power consumption of the entire system, and it has therefore become necessary to reduce the power consumption of each operating device in the system. As one technique for reducing power consumption, consideration has been given to changing the cache memory from a volatile memory such as a static random access memory (SRAM) to a nonvolatile memory such as a magnetic random access memory (MRAM).
Since the standby power of the nonvolatile memory is lower than that of the SRAM, the power consumption of the system as a whole can be reduced. On the other hand, a problem arises in that the operating power becomes high. That is, in the SRAM, when a read request is issued, the bit lines are precharged and a voltage is applied to a word line to perform the read operation. Since it takes a relatively long time to charge the word line, the power consumed by the voltage transitions of the precharged bit lines accounts for most of the read power consumption during this period.
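The bit-line contribution to SRAM read power described above can be illustrated with a back-of-the-envelope estimate: each full voltage swing on a precharged bit line draws roughly C x Vdd^2 from the supply. The following is a minimal sketch; every numeric value (capacitance, supply voltage, bit-line count, read rate) is a hypothetical placeholder, not a parameter from the document.

```python
# Hypothetical parameters illustrating why bit-line voltage transitions
# dominate SRAM read power. All values are assumed, not from the document.
C_BITLINE = 100e-15   # per-bit-line capacitance, ~100 fF (assumed)
VDD = 1.0             # supply voltage in volts (assumed)
N_BITLINES = 512      # bit lines swung per read access (assumed)
READS_PER_S = 1e9     # read requests per second (assumed)

# Energy for one full voltage swing on one precharged bit line: C * Vdd^2.
energy_per_read = N_BITLINES * C_BITLINE * VDD**2   # joules per read
read_power = energy_per_read * READS_PER_S          # watts
```

Even with these modest assumed values, the bit-line switching term scales linearly with both the read rate and the number of bit lines, which is why it dominates the dynamic read power.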
In a case where the MRAM, which is a nonvolatile memory, is used as the cache memory, a sense amplifier is driven when a read request is issued, and a current is caused to flow through the memory element of the MRAM to read its state. However, the sense amplifier is kept driven until the read operation is complete, and the data can be read only after the read current has stabilized, so the current must flow continuously during this period. Since the time required for the current to stabilize is comparatively long, the read power consumption becomes higher than that of the SRAM.
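The trade-off described in the two paragraphs above (lower standby power for the nonvolatile memory, but higher per-read energy because the sense amplifier stays on until the read current stabilizes) can be sketched with a simple total-energy model. All parameter values below are hypothetical, chosen only to illustrate the break-even behavior, and do not come from the document.

```python
# Illustrative energy model: total energy over an interval is standby
# (leakage) energy plus dynamic read energy. All numbers are assumed.

def total_energy(standby_power_w, energy_per_read_j, duration_s, num_reads):
    """Standby energy over the interval plus per-read dynamic energy."""
    return standby_power_w * duration_s + energy_per_read_j * num_reads

# Hypothetical parameters: SRAM leaks more on standby; MRAM pays more per
# read because the sense amplifier is driven until the current settles.
SRAM = dict(standby_power_w=1e-3, energy_per_read_j=1e-12)
MRAM = dict(standby_power_w=1e-6, energy_per_read_j=1e-11)

duration = 1.0  # one second of operation (assumed)

# Mostly idle workload: standby power dominates, favoring the MRAM cache.
idle_sram = total_energy(**SRAM, duration_s=duration, num_reads=1_000)
idle_mram = total_energy(**MRAM, duration_s=duration, num_reads=1_000)

# Read-heavy workload: dynamic read energy dominates, favoring the SRAM.
busy_sram = total_energy(**SRAM, duration_s=duration, num_reads=10**9)
busy_mram = total_energy(**MRAM, duration_s=duration, num_reads=10**9)
```

Under these assumed numbers the MRAM cache consumes less total energy when the cache is mostly idle, while the SRAM cache consumes less under a read-heavy workload, which is the motivation for the power-reduction techniques this background leads into.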