1. Field of the Invention
The present invention relates to a semiconductor memory device, in particular, to a static semiconductor memory device (SRAM: Static Random Access Memory). More specifically, the present invention relates to a construction of an internal data read and data transfer portion of the SRAM.
2. Description of the Background Art
An SRAM has a memory cell formed with a latch circuit, and complementary data are kept at storage nodes inside the memory cell. Therefore, an SRAM cell can stably store data while power is supplied, and refreshing of stored data is not required, in contrast to a DRAM (Dynamic Random Access Memory), which stores information as electric charge on a capacitor. Thus, the SRAM is controlled more easily than the DRAM, and is widely used in various processing systems.
In addition, since memory cell data can be accurately read from the SRAM even when a row and a column (a word line and a bit line) are selected at the same time, the SRAM allows high-speed access and has a shorter cycle time as compared with the DRAM, and is widely used as a high-speed memory such as a cache memory.
As processing systems become faster in recent years, further high-speed access is required for various memories including the SRAM. Prior art document 1 (Japanese Patent Laying-Open No. 06-333389) shows an example of a construction for implementing such speedup of a semiconductor memory device.
Prior art document 1 discloses a construction for speeding up data reading in a DRAM. Specifically, in the construction described in prior art document 1, a voltage level of a column selection signal is boosted to a level higher than an internal power supply voltage for connecting a bit line (sense amplifier) of a selected column to a common data line through a low resistance, in order to increase a transconductance, gm, of a column selection gate for connecting the selected column to the common data line.
In the construction described in prior art document 1, a memory array is formed into a block division structure so as to divide a bit line, and a sense amplifier is arranged between divided bit lines to form a so-called “shared sense amplifier” construction. For implementing high-speed reading, a load of the bit line is reduced, a read voltage of a memory cell to the sense amplifier is increased, and in addition, memory cell data is transferred to the sense amplifier at a high speed.
Further, prior art document 2 (Japanese Patent Laying-Open No. 06-119785) shows a construction aiming at speeding up a sense amplifier in a data read portion of the SRAM. In the construction described in prior art document 2, a bit line pair of a selected column is coupled to internal data lines. A variation of a signal on the internal data line is detected with a current mirror type sense amplifier. In prior art document 2, the current mirror sense amplifier is provided in two stages in order to obtain a symmetric waveform of a read signal of the sense amplifier. Complementary mirror currents are generated in a first stage sense amplifier according to complementary signals of an internal data line pair, and the complementary mirror currents are used to drive a second stage sense amplifier to transfer final read data to a main amplifier or an output buffer.
In prior art document 2, a bus load circuit for limiting a signal amplitude of an internal data bus line is also arranged to limit the signal amplitude to implement a high-speed internal data transfer.
Prior art document 3 (Japanese Patent Laying-Open No. 59-139193) shows a construction for reading data at a high speed, in which an internal data line is provided for each of two memory planes, and the internal data line provided for a selected memory plane is connected to a sense amplifier via a switch circuit. In the construction described in prior art document 3, a memory mat is divided into two memory planes in a row direction, and each memory plane includes static memory cells arranged in rows and columns. The internal data line pair is arranged corresponding to each memory plane. A bit line pair of a selected column is coupled to the corresponding internal data lines through a column selection circuit of the selected memory plane. Then, the internal data line is coupled to the sense amplifier via the switch circuit to read data. With a division structure of the internal data line, the number of column selection gates in the column selection circuit connected to each internal data line is decreased, and a parasitic capacitance of the internal data line is correspondingly decreased to transmit read data from a selected bit line to the sense amplifier at a high speed.
Prior art document 4 (Japanese Patent Laying-Open No. 10-106265) shows a construction intended to speed up writing and reading of data. In the construction disclosed in prior art document 4, a memory mat is divided into two memory blocks along a bit line direction. A common bit line (an internal data line) is arranged for each memory block, and a bit line of a selected column is coupled to a corresponding common bit line. A sense amplifier and a write driver are arranged in common to the memory blocks. The common bit line of a selected memory block is selected by a selection circuit and coupled to the sense amplifier and the write driver.
With a division structure of the bit line in prior art document 4, the number of memory cells connected to one bit line is decreased, and a bit line load is correspondingly decreased. Charging and discharging (including precharging) of the bit line are performed faster due to this decreased bit line load, and an access time is decreased.
In the construction described in prior art document 1, a connection resistance between the selected column and the common data line in the shared sense amplifier construction of the DRAM is decreased. In the DRAM, however, sense amplifiers are arranged corresponding to the respective memory cell columns (bit line pairs), and each bit line pair of a selected memory block is coupled to a corresponding sense amplifier via a bit line isolation gate. The sense amplifier (bit line pair) of the selected column is coupled to the common data line through the column selection gate. The common data line is arranged extending for a long distance to transfer internal read data to an output buffer circuit, and has a large load. In addition, a main amplifier for amplifying the internal read data and a write driver for writing data are further connected to the common data line, and therefore the load becomes even larger.
Prior art document 1 merely describes the construction in which the bit line pair (sense amplifier) of the selected column is connected to the common data line through a low resistance, and an effect of the load of the common data line on data reading as well as a construction for decreasing the load of the common data line are not considered. In the SRAM, the sense amplifier is coupled to the bit line pair of a selected column via the internal data line. Therefore, the SRAM sense amplifier itself has to amplify, at a high speed, a signal amplitude corresponding to memory cell data appearing on the internal data line. As described above, the internal data line has the write driver and other circuits coupled thereto, and thus has a large load. Therefore, the shared sense amplifier construction of the DRAM as described in prior art document 1 cannot be simply applied to a portion of a sense amplifier of the SRAM.
In addition, in the DRAM, after memory cell data are amplified and latched by the sense amplifiers, a column selection operation is performed and the bit line pair (sense amplifier) of a selected column is coupled to the common data line. Therefore, the construction of the DRAM sense amplifier of prior art document 1 cannot be applied to the construction of the SRAM in which a signal amplitude corresponding to memory cell data of a selected column is transmitted to and amplified by the sense amplifier to generate internal read data.
In the construction described in prior art document 2, a plurality of stages of sense amplifiers are cascaded to generate internal read data having a symmetric signal waveform for transferring internal data of a small amplitude. Prior art document 2 also shows a block division structure in which the internal data bus is arranged in common to a plurality of memory blocks and memory cell data of a selected block is read. A local data line is arranged in each block, and the local data line is driven according to the memory cell data by a read amplifier having a function of column selection, and a signal of the local data line is further amplified by a local sense amplifier. A block read amplifier for a selected memory block is activated to drive a common internal data line according to an output signal of a corresponding local sense amplifier.
A block read amplifier is arranged on the common internal data line corresponding to each memory block, and the load of each block read amplifier is added to the common internal data line. The common internal data line is coupled to a sense main amplifier for generating final internal data. In the construction described in prior art document 2, in order to generate data of a symmetric signal waveform, a main amplifier is coupled to a common internal data bus having a large load, in parallel with a sense amplifier for generating complementary currents according to a voltage of the common data line. Prior art document 2 does not consider a reduction in the load of the common internal data bus in the construction in which the memory array is formed into the block division structure and a selected block transmits the internal read data to the sense amplifier via the common data line. In other words, prior art document 2 intends only shaping of a signal waveform to perform an internal data transfer at a high speed regardless of a variation in sense amplifier load, and does not consider a problem of a data transfer speed when the internal data bus has a large load, or a construction for speeding up data reading by reducing the load of the internal data bus.
In the construction disclosed in prior art document 3, two memory planes are arranged along a word line direction, and the internal data line arranged corresponding to the selected memory plane is coupled to the sense amplifier. Since the memory plane is not divided in the bit line direction in this construction, when the number of memory cells is increased in the bit line direction, the bit line load is accordingly increased, and therefore high-speed reading cannot be implemented.
Prior art document 3 merely considers forming of the internal data line into a division structure to reduce the load of the internal data line, and does not consider reduction in the bit line load for the sense amplifier.
In the construction disclosed in prior art document 4, the memory mat is divided into two memory blocks, and a common data line selection circuit and a sense amplifier/write driver are arranged between the two memory blocks. Therefore, the bit line load can be halved with the bit line division structure as compared with a bit line non-division structure. When the number of memory cells is further increased, however, the load of the bit line is accordingly increased, and thus high-speed writing/reading of data cannot be implemented. Although prior art document 4 describes a divided-in-two structure for the bit line, the problem of an increase in bit line load when the number of memory cells is further increased in the bit line direction is not considered.