1. Field of the Invention
The present invention relates to a memory controller for accessing a memory, to a data processor which has the memory controller and a central processing unit, and to a data processing system which has the data processor and a memory. The invention also relates to a technique which is useful when applied to a semiconductor device in which the above-mentioned items are formed in one package.
2. Description of the Prior Art
A data processor having a central processing unit (CPU) makes access to memories, which include a main memory and a cache memory. The main memory stores the programs to be run and the data to be processed by the CPU. A main memory formed in a semiconductor device is typically a large-capacity memory made of volatile memories such as a DRAM (dynamic random access memory) or of nonvolatile memories such as a flash memory. The cache memory is made of memories of relatively small capacity, such as an SRAM (static random access memory). The cache memory is located between the CPU, which operates at high speed, and the main memory, which operates more slowly than the CPU, thereby absorbing the difference in their operating speeds.
For high-speed operation of a data processing system having a CPU, a cache memory and a main memory, there has been a technique of using the sense amplifiers of the DRAM of the main memory in a manner like a cache memory. This technique operates as follows. The data processor first outputs a row address to the DRAM. The DRAM has one of its word lines selected by the row address, and the data of the entire line on the selected word line are transferred to and held by the sense amplifiers. The data processor next outputs a column address to the DRAM. The column address selects certain column switches, causing the sense amplifiers to output the data.
The sense amplifiers continue to hold the data of the entire line of the selected word line after the readout. At the next DRAM access by the data processor, if the row address is the same as the previous one, the data processor outputs only a column address. Word line selection generally takes a relatively long time, whereas by retaining the data in the sense amplifiers, data can be read out in a short time for an access to the same word line, i.e., an access to the same page.
However, the foregoing prior art involves the following problem. In case data is to be read out from a word line different from the word line whose data are held by the sense amplifiers, i.e., at the occurrence of a cache miss in the cache-wise use of the sense amplifiers, it is necessary to cancel the selection of the current word line, precharge the data lines, and thereafter select a new word line. The need for precharging at such an access results in a longer data read time than a usual data readout.
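The page-hit and page-miss behavior described above can be sketched as follows. This is a minimal illustration only, not part of any prior-art disclosure; the class name, structure, and cycle counts are hypothetical values chosen to show that a same-page access skips word line selection, while a different-page access additionally pays the precharge penalty.

```python
# Hypothetical timing values (in cycles) for illustration only.
T_PRECHARGE = 3   # cancel the current word line and precharge the data lines
T_ROW = 5         # select a word line and fill the sense amplifiers
T_COL = 2         # select column switches and read from the sense amplifiers

class OpenPageDram:
    """Sketch of a DRAM whose sense amplifiers act like a cache for one page."""

    def __init__(self):
        self.open_row = None  # row whose data the sense amplifiers currently hold

    def access(self, row):
        """Return the latency in cycles for an access to `row`."""
        if self.open_row == row:          # same page: column access only
            return T_COL
        latency = T_ROW + T_COL           # new word line must be selected
        if self.open_row is not None:     # a page is open: precharge first
            latency += T_PRECHARGE
        self.open_row = row               # sense amplifiers now hold this row
        return latency

dram = OpenPageDram()
print(dram.access(0))  # first access: row + column = 7 cycles
print(dram.access(0))  # page hit: column only = 2 cycles
print(dram.access(1))  # page miss: precharge + row + column = 10 cycles
```

Under these assumed values, a page miss takes five times as long as a page hit, which is the problem the cited prior-art techniques attempt to mitigate.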
There are several techniques intended to overcome the above-mentioned problem, as described in JP-A Nos. 1994-131867, 1995-78106 and 2000-21160.
JP-A No. 1994-131867 discloses a technique for speeding up the read and write operations of a DRAM whose sense amplifiers are used as a cache memory, even at the occurrence of a cache miss. Specifically, the DRAM has its data lines divided into data lines, which are connected to the memory cells and pre-amplifiers, and global data lines, which are connected to the main amplifiers used as the cache memory.
It also shows the arrangement of a means of short-circuiting the data lines, which are connected with the memory cells and pre-amplifiers, independently of the global data lines. This arrangement enables the precharging of the data lines connected with the memory cells and pre-amplifiers even while the main amplifiers connected to the global data lines hold the data of one page, and thus enables preparation for reading out data from another page, i.e., another word line.
JP-A No. 1995-78106 discloses a technique for speeding up the read and write operations of a DRAM whose sense amplifiers for the memory banks are used as a cache memory, even at the occurrence of alternating accesses to the memory banks. Specifically, a data processing system is provided, in its DRAM control circuit, with row address memory means in correspondence to the memory banks. This arrangement makes it possible to judge, for each memory bank, whether a memory access is to the same row address as the previous access, i.e., whether the access is to the same page, and thus enables high-speed block data transfer.
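The per-bank row address memory means described above can be sketched as follows. This is an illustrative sketch only, assuming a simple register-per-bank structure; the class and method names are hypothetical and do not reproduce the actual circuit of the cited publication.

```python
class BankRowTracker:
    """Sketch of per-bank row-address registers in a DRAM control circuit.

    One register per memory bank remembers the row address last used in
    that bank, so the controller can judge, per bank, whether a new
    access is to the same page even when accesses alternate between banks.
    """

    def __init__(self, num_banks):
        self.last_row = [None] * num_banks  # one row-address register per bank

    def is_same_page(self, bank, row):
        """Return True if `row` matches the row last opened in `bank`."""
        hit = self.last_row[bank] == row
        self.last_row[bank] = row           # record the row for the next judgment
        return hit

tracker = BankRowTracker(num_banks=2)
print(tracker.is_same_page(0, 10))  # False: bank 0 not yet accessed
print(tracker.is_same_page(1, 20))  # False: bank 1 not yet accessed
print(tracker.is_same_page(0, 10))  # True: same page in bank 0 despite the
                                    # intervening access to bank 1
```

The point of the per-bank registers is visible in the last call: with a single shared register, the intervening access to bank 1 would have destroyed the record for bank 0 and the same-page access would be misjudged as a miss.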
JP-A No. 2000-21160 discloses a technique for using the sense amplifiers for the memory banks of a multi-bank DRAM as a cache memory. With the intention of enhancing the hit rate of the sense amplifier cache, it shows a means of reading data of a predicted address in advance, the next address being issued in advance by adding a certain offset value to the address of the memory bank accessed previously.
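The offset-based address prediction described above can be sketched as follows. This is an illustrative sketch under the assumption of a fixed offset; the function name, the offset value, and the sequential access stream are hypothetical and serve only to show how advance issuance of a predicted address can raise the hit rate for regular access patterns.

```python
OFFSET = 1  # assumed fixed offset between successive accesses

def predict_next(prev_addr, offset=OFFSET):
    """Predicted next address, issued in advance of the actual request,
    formed by adding a fixed offset to the previously accessed address."""
    return prev_addr + offset

# For a sequential access stream the prediction is always correct, so the
# data of the next address can be held in the sense amplifiers in advance.
stream = [100, 101, 102, 103]
hits = sum(predict_next(prev) == actual
           for prev, actual in zip(stream, stream[1:]))
print(hits)  # 3 correct predictions out of 3
```

Such a predictor helps only when the access pattern matches the assumed offset; as the inventors' observation in the following paragraph suggests, real access patterns are uneven and program-dependent.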
The inventors of the present invention have found that access to the main memory is uneven when reading a program to be run by the central processing unit or reading data out of the main memory. For example, there is a case of frequent access to the same page (the same word line) of the main memory, a case of frequent access to different pages, and a case of access to the same page and access to different pages at an equal frequency. This unevenness of access results largely from the characteristics of the program. The inventors of the present invention have found that the above-mentioned prior arts cannot deal with the unevenness of access frequency sufficiently and cannot solve the problem of slower data read/write operations from/to the main memory caused by the unevenness.