1. Field of the Invention
The present invention relates to a semiconductor memory and, more particularly, to a semiconductor memory having cache holding means for temporarily holding data on a row address basis, i.e., cache data.
2. Description of the Related Art
A semiconductor memory of the type described has a main memory implemented as a low-speed, large-capacity DRAM (Dynamic Random Access Memory), and a buffer memory, or cache memory, for high-speed access to the DRAM. The cache memory is implemented as high-speed, small-capacity registers or a bipolar RAM (Random Access Memory) capable of temporarily storing the data of the DRAM on a row basis. The cache memory, which is expensive, is sometimes replaced with a high-speed access mechanism available with the DRAM, e.g., the page mode or the static column mode. The cache memory and such substitutes therefor will hereinafter be referred to as cache holding means.
Specifically, data which are stored in the main memory on a row address basis and which a CPU (Central Processing Unit) is likely to need, i.e., cache data, are copied into the cache holding means. If the address of a memory access from the CPU coincides with the address of any one of the cache data (cache hit), the CPU receives the cache data within the access time of the cache holding means. If the address does not coincide with that of any of the cache data (cache miss), the CPU receives the necessary data from the main memory in a usual memory access cycle. Hence, the CPU achieves an access at a higher speed in the event of a cache hit than in the event of a cache miss.
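The hit/miss behavior described above can be sketched as a toy model. All names, the eviction policy, and the latency figures below are illustrative assumptions for exposition only; they are not taken from the related art.

```python
# Minimal model of row-address caching: a hit is served within the access
# time of the cache holding means, a miss falls back to a full DRAM access
# cycle and refills one entry.  Latencies are illustrative values only.

CACHE_ACCESS_NS = 15   # assumed access time of the cache holding means
DRAM_ACCESS_NS = 60    # assumed usual memory access cycle

class RowCache:
    def __init__(self, entries):
        self.entries = entries          # number of independent cache data blocks
        self.rows = {}                  # row address -> copied row data

    def access(self, row_addr, main_memory):
        if row_addr in self.rows:       # cache hit
            return self.rows[row_addr], CACHE_ACCESS_NS
        data = main_memory[row_addr]    # cache miss: usual memory access
        if len(self.rows) >= self.entries:
            self.rows.pop(next(iter(self.rows)))   # evict the oldest entry
        self.rows[row_addr] = data      # copy the row into the cache
        return data, DRAM_ACCESS_NS

main_memory = {row: f"row-{row}-data" for row in range(8)}
cache = RowCache(entries=2)
_, t_miss = cache.access(3, main_memory)   # first access to row 3: miss
_, t_hit = cache.access(3, main_memory)    # repeated access: hit
print(t_miss, t_hit)                       # 60 15
```

The repeated access is served faster than the first, which is the speed advantage of a cache hit described above.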
A cache hit ratio, i.e., a ratio of cache hits to memory accesses, must be increased in order to improve the performance of a computer system. Generally, the cache hit ratio can be improved if the cache holding means is provided with a greater number of mutually independent cache data blocks, i.e., a greater number of entries. A semiconductor memory directed toward a greater number of entries is taught in, e.g., Japanese Patent Laid-Open Publication No. 3-21289. In the memory taught in this document, one cache holding means is assigned to each sense amplifier corresponding to a row decoder in order to increase the number of entries.
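The relation between the number of entries and the hit ratio can be illustrated with a short calculation. The access pattern, the oldest-first eviction, and the entry counts below are hypothetical choices for illustration, not figures from the publication cited above.

```python
# Cache hit ratio = cache hits / memory accesses.  For an access pattern
# that cycles over several rows, a cache with too few entries thrashes,
# while one with enough entries to hold the working set hits repeatedly.

def hit_ratio(entry_count, access_pattern):
    held = []                            # rows currently in the cache holding means
    hits = 0
    for row in access_pattern:
        if row in held:
            hits += 1
        else:
            if len(held) >= entry_count:
                held.pop(0)              # evict the oldest row
            held.append(row)
    return hits / len(access_pattern)

pattern = [0, 1, 2, 0, 1, 2] * 10        # cycling over three distinct rows
print(hit_ratio(1, pattern))             # 0.0  -- one entry: every access misses
print(hit_ratio(4, pattern))             # 0.95 -- pattern fits; only 3 cold misses
```

With a single entry every access evicts the row the next access needs, so the ratio collapses; with enough entries only the first touch of each row misses.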
Today, microfabrication technologies derived from advanced DRAM implementations, including high integration and large capacity, have made it possible to provide a memory with row address select lines (hereinafter referred to as word lines) consisting of main word lines and subword lines. This kind of memory structure successfully promotes rapid access to the memory.
Generally, the above-described type of semiconductor memory has a memory circuit including the cache holding means, a memory controller for feeding addresses and select signals to the memory circuit, and a data bus. In response to an address and a select signal, the memory circuit compares the address with the row addresses of the data stored in the cache holding means. If the two addresses are coincident, the memory circuit outputs the corresponding data to the data bus while outputting a corresponding response to the memory controller. If they are not coincident, the memory circuit accesses the memory cell data by a usual memory access, outputs the data to the data bus, and outputs a response to the memory controller.
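The compare-and-respond cycle just described can be sketched as follows. The function and signal names are invented for this sketch; the real circuit performs the comparison in hardware, but the control flow is the same.

```python
# Sketch of one memory cycle: the memory circuit compares the incoming
# row address with the row addresses of the data held in the cache
# holding means, then drives the data bus and a hit/miss response line
# to the memory controller.  On a miss it performs a usual memory
# access and refills the cache entry.  All names are illustrative.

def memory_cycle(row_addr, cached_rows, dram):
    if row_addr in cached_rows:
        return cached_rows[row_addr], "HIT"    # data bus value, response line
    data = dram[row_addr]                      # usual memory access
    cached_rows[row_addr] = data               # refill the cache holding means
    return data, "MISS"

dram = {0: "A", 1: "B"}
cached = {0: "A"}
print(memory_cycle(0, cached, dram))   # ('A', 'HIT')
print(memory_cycle(1, cached, dram))   # ('B', 'MISS')
```

In both cases the requested data reaches the data bus; only the response line tells the memory controller whether the fast or the usual cycle was used.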
In the memory taught in the previously mentioned Japanese Patent Laid-Open Publication No. 3-21289, the cache data dealt with by the cache holding means has a unit size corresponding to the data read out by a single access via the row decoder, i.e., a word. This brings about a problem that the same number of entries is required even for data which is smaller than a word and could be held in a distributed manner. This limits the freedom available with the memory and prevents the number of entries from being increased. Moreover, because the cache holding means holds data on a row decoder basis, it cannot hold a plurality of independent data in the column direction without resorting to extra holding means.