A computer system generally includes a CPU for executing instructions and a main memory for storing the data and programs requested by the CPU. To enhance system performance, it is essential to increase the operating speed of the CPU and also to make the access time to the main memory as short as possible, so that the CPU can operate with as few wait states as possible. The operating clock cycles of modern CPUs, such as recent microprocessors, continue to shorten as clock frequencies reach 33, 66, 100 MHz and beyond. However, the operating speed of high-density DRAM, which remains the cheapest memory on a price-per-bit basis and is widely used as a main-memory device, has not been able to keep up with the ever-faster CPU. A DRAM inherently has a minimum RAS access time, i.e., the minimum period between the activation of the signal RAS, upon which RAS changes from a high level to a low level, and the output of data from the chip with column addresses latched by activation of CAS. This RAS access time is called the RAS latency, and the time between the activation of the signal CAS and the output of data is called the CAS latency. Moreover, a precharge time is required before re-access following the completion of a read operation or cycle. These factors lower the overall operating speed of the DRAM, thereby causing the CPU to incur wait states.
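The wait-state problem described above can be illustrated with a toy timing model. This is only an illustrative sketch, not circuitry from the patent; all function names and timing values are hypothetical.

```python
import math

def dram_read_latency_ns(ras_to_cas_ns, cas_latency_ns, precharge_ns, row_open):
    """Time to read one word from a toy DRAM model (hypothetical values).

    If the target row is already open, only the CAS latency applies.
    Otherwise the closed row must first be precharged and re-activated
    (RAS latency) before the column access (CAS latency) completes.
    """
    if row_open:
        return cas_latency_ns
    return precharge_ns + ras_to_cas_ns + cas_latency_ns

def cpu_wait_states(access_ns, cpu_clock_ns):
    """Whole CPU clock cycles the processor must stall beyond one cycle."""
    return max(0, math.ceil(access_ns / cpu_clock_ns) - 1)

# A 33 MHz CPU has a ~30 ns clock; a 120 ns row-miss access stalls it for
# three extra cycles, while a row hit completes with no wait states.
print(cpu_wait_states(dram_read_latency_ns(60, 30, 30, row_open=False), 30))  # 3
print(cpu_wait_states(dram_read_latency_ns(60, 30, 30, row_open=True), 30))   # 0
```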
To compensate for the gap between the operating speed of the CPU and that of a main memory such as DRAM, the computer system includes an expensive high-speed buffer memory, such as a cache memory, arranged between the CPU and the main memory. The cache memory stores data from the main memory that the CPU has requested. Whenever the CPU issues a request for data, a cache memory controller intercepts it and checks whether the data is stored in the cache memory. If the requested data is present, this is called a cache hit, and a high-speed data transfer is immediately performed from the cache memory to the CPU. If it is not present, this is called a cache miss, and the cache memory controller reads the data from the slower main memory. The read-out data is stored in the cache memory and sent to the CPU, so that a subsequent request for the same data can be served immediately from the cache memory. Thus, in the case of a cache hit, a high-speed data transfer is accomplished from the cache memory; in the case of a cache miss, however, a high-speed transfer from the main memory to the CPU cannot be expected, and the CPU incurs wait states. It is therefore extremely important to design DRAMs serving as the main memory for high-speed operation.
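The intercept-and-check behavior of the cache memory controller can be sketched as follows. This is a hypothetical software model, with a dict standing in for both the cache array and the main memory; it is not the hardware design itself.

```python
class CacheController:
    """Toy model of the cache-controller decision: hit serves the CPU
    directly; miss fetches from slow main memory and fills the cache."""

    def __init__(self, main_memory):
        self.main_memory = main_memory   # slow backing store (address -> data)
        self.cache = {}                  # fast buffer, initially empty
        self.hits = 0
        self.misses = 0

    def read(self, address):
        if address in self.cache:        # cache hit: fast path to the CPU
            self.hits += 1
            return self.cache[address]
        self.misses += 1                 # cache miss: read slower main memory,
        data = self.main_memory[address]
        self.cache[address] = data       # keep a copy so the next request
        return data                      # for this address is a hit

mem = {0x100: "A", 0x104: "B"}
ctl = CacheController(mem)
ctl.read(0x100)   # first access: miss, filled from main memory
ctl.read(0x100)   # second access: hit, served from the cache
print(ctl.hits, ctl.misses)  # 1 1
```

A real controller would also manage limited cache capacity and replacement, which this sketch omits.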
Data transfer between DRAMs and the CPU or the cache memory is accomplished with sequential information or data blocks. To transfer such continuous data at high speed, various operating modes, such as the page, static column and nibble modes, have been implemented in DRAMs. These operating modes are disclosed in U.S. Pat. Nos. 3,969,706 and 4,750,839. The memory cell array of a DRAM with the nibble mode is divided into four equal parts, so that a plurality of memory cells can be accessed with the same address; data is temporarily stored in a shift register to be sequentially read out or written in. However, since a DRAM with the nibble mode cannot continuously transfer more than 4 bits of data, it offers little flexibility in the design of high-speed data transfer systems. The page mode and the static column mode, after selection of the same row address in a RAS timing, can sequentially access column addresses in synchronism with CAS toggling (cycles) and with transition detection of the column addresses, respectively. However, since a DRAM with the page or static column mode needs extra time, such as the setup and hold times of the column address, to receive the next new column address after a column address has been selected, it cannot access continuous data at a memory bandwidth higher than 100 Mbit/s, i.e., it cannot reduce the CAS cycle time below 10 ns. Also, since arbitrarily shortening the CAS cycle time in the page mode cannot guarantee a sufficient column selection time to write data into the selected memory cells during a write operation, erroneous data may be written. Moreover, since these high-speed operating modes are not synchronous with the system clock of the CPU, the data transfer system must use a newly designed DRAM controller whenever the CPU is replaced with a faster one.
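The quoted 100 Mbit/s ceiling follows directly from the 10 ns minimum CAS cycle time: one bit transfers per data pin per CAS cycle, so per-pin bandwidth is the reciprocal of the cycle time. A minimal arithmetic sketch (the function name is illustrative):

```python
def per_pin_bandwidth_mbps(cas_cycle_ns):
    """Bits per second per data pin, in Mbit/s, for a given CAS cycle time.

    One bit moves per cycle, so bandwidth = 1 bit / (cas_cycle_ns * 1e-9 s),
    converted to Mbit/s by dividing by 1e6, i.e. 1e3 / cas_cycle_ns.
    """
    return 1e3 / cas_cycle_ns

# A 10 ns CAS cycle caps each data pin at 100 Mbit/s, matching the text.
print(per_pin_bandwidth_mbps(10))  # 100.0
```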
Thus, to keep up with high-speed microprocessors of both the CISC and RISC types, a synchronous DRAM is required that is capable of accessing data at high speed in synchronism with the system clock of the microprocessor. An introduction to synchronous DRAMs, without disclosure of detailed circuits, appears in NIKKEI MICRODEVICES, April 1992, pages 158-161.
To increase convenience of use and also enlarge the range of applications, it is more desirable for an on-chip synchronous DRAM not only to operate at various system clock frequencies, but also to be programmable with various operation modes, such as a latency depending on each clock frequency, a burst length or size defining the number of output bits, a column addressing type, and so on. Examples of selecting an operation mode in a DRAM are disclosed in U.S. Pat. No. 4,833,650, issued May 23, 1989, and in U.S. Pat. No. 4,987,325, issued Jan. 22, 1991 and assigned to the same assignee. These prior art patents disclose technologies for selecting one operation mode, such as the page, static column or nibble mode. In these patents, the operation mode is selected by cutting fuse elements by means of a laser beam from an external laser apparatus or an electric current from an external power supply, or by selectively wiring bonding pads. However, with these prior technologies, once the operation mode has been selected, it cannot be changed into another operation mode. Thus, the prior art does not permit changes between operation modes even if subsequently required.
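The contrast drawn above can be sketched in software terms: a fuse-programmed mode is a write-once setting, whereas the desired programmable operation mode behaves like a rewritable register. The class and field names below are hypothetical illustrations, not the patent's circuit.

```python
class FuseProgrammedMode:
    """Write-once selection, modeling cut fuses or wired bonding pads:
    the mode can be set exactly once and never changed afterward."""

    def __init__(self):
        self._mode = None

    def program(self, mode):
        if self._mode is not None:
            raise RuntimeError("fuses already cut; mode cannot be changed")
        self._mode = mode

class ModeRegister:
    """Rewritable operation-mode register, as desired for a synchronous
    DRAM: latency, burst length and addressing type can be reprogrammed."""

    def __init__(self):
        self.cas_latency = None
        self.burst_length = None
        self.addressing = None

    def set_mode(self, cas_latency, burst_length, addressing):
        self.cas_latency = cas_latency
        self.burst_length = burst_length
        self.addressing = addressing     # e.g. "sequential" or "interleave"

mr = ModeRegister()
mr.set_mode(cas_latency=3, burst_length=8, addressing="sequential")
mr.set_mode(cas_latency=2, burst_length=4, addressing="interleave")  # allowed
print(mr.cas_latency, mr.burst_length)  # 2 4
```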