1. Field of the Invention
Embodiments of the present invention relate generally to semiconductor memory devices. More particularly, embodiments of the invention relate to a semiconductor memory device having a hierarchical bit line structure and an associated data path.
A claim of priority is made to Korean Patent Application 10-2005-0111566, filed on Nov. 22, 2005, the disclosure of which is hereby incorporated by reference in its entirety.
2. Description of Related Art
Modern computing applications continue to demand semiconductor memory devices with larger capacity, higher performance, and lower power consumption. As a practical matter, it can be difficult to simultaneously achieve all three objectives, as there tend to be tradeoffs between capacity, performance, and power consumption.
As a general rule, the capacity and performance of semiconductor memory devices tends to increase as the density of memory cells in the devices increases. However, there are exceptions to this rule—some ways of increasing memory cell density in a semiconductor memory device can actually have a negative impact on the device's performance.
For example, one way to increase the density of memory cells in a semiconductor memory device is to connect a larger number of memory cells to each bit line in the device. As the number of memory cells connected to a bit line increases, however, the load resistance and load capacitance of the bit line tend to increase accordingly. The time required to transfer charge from each memory cell to the bit line during a read operation therefore increases, deteriorating the performance of the device.
Power consumption for a semiconductor memory device is a function of the voltage level required to read/write data from/to the device. Most contemporary semiconductor memory devices using complementary metal-oxide semiconductor (CMOS) transistors transfer data to/from memory cells at voltages that correspond to the working voltages of the constituent CMOS transistors. For example, CMOS transistor working voltages typically include both “high” and “low” voltages (e.g., VDD and VSS, respectively). Further, many contemporary semiconductor memory devices use so-called “full-swing data” techniques in conjunction with CMOS transistors to communicate data through various data transmission paths, such as a data read path and a data write path. This approach tends to decrease the overall operating speed of the memory device and increase both power consumption and chip size. The working voltage VDD of CMOS logic elements may be variously defined, but is presently about 1.2V for static random access memory (SRAM) devices, for example. Full-swing signaling normally requires a data voltage swing at least equal to VDD, while small-swing signaling requires a data voltage swing less than VDD.
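The power cost of full-swing versus small-swing signaling can be illustrated with a back-of-envelope energy calculation. This is a minimal sketch; the line capacitance and the 200 mV small-swing value are assumed for illustration only and are not taken from any particular device.

```python
def line_energy(c_line_f, v_swing, v_dd):
    # Dynamic signaling energy per transition: the charge moved onto the
    # line (C * Vswing) is drawn from the supply at Vdd.
    return c_line_f * v_swing * v_dd

C_LINE = 500e-15   # assumed 500 fF data-line load capacitance
VDD = 1.2          # representative SRAM CMOS working voltage noted above

full_swing = line_energy(C_LINE, VDD, VDD)    # swing equal to VDD
small_swing = line_energy(C_LINE, 0.2, VDD)   # assumed 200 mV small swing

print(f"full-swing : {full_swing * 1e15:.0f} fJ/transition")
print(f"small-swing: {small_swing * 1e15:.0f} fJ/transition")
print(f"ratio      : {full_swing / small_swing:.1f}x")
```

Under these assumed numbers, reducing the swing from VDD to 200 mV cuts the per-transition line energy by a factor of six, which is the motivation for small-swing data paths.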
A great deal of effort has gone into the development of high performance memory devices having high density and yet operating with relatively low power consumption. Resulting contemporary devices include, for example, those described in U.S. Pat. Nos. 5,986,914 and 6,822,918, the subject matter of which is hereby incorporated by reference.
Nonetheless, these conventional devices suffer from a number of residual problems, particularly those associated with bit line structures, data read paths, and data write paths. Several of these apparent problems will now be described in some additional detail as background context to the inventive embodiments that follow.
Figure (FIG.) 1 is a graph illustrating load capacitance as a function of the number of memory cells connected to one bit line in an exemplary, conventional SRAM device. The load capacitance of a bit line includes load capacitance related to a connected sense amplifier, column transmission (or “pass”) circuitry, and other “residual” components associated with the bit line and peripheral circuits connected to the bit line.
Within the graph of FIG. 1, the total bit line load capacitance comprises a “YPATH” component indicating a portion of the load capacitance associated with the column transmission circuitry, and a “SenseAmp” component indicating a portion of the load capacitance associated with the associated sense amplifier. In operation, the column transmission circuitry receives a column address adapted to select and electrically connect a bit line with a sense amplifier, and generally comprises a plurality of column transmission gates.
As shown in FIG. 1, in a case where the number of memory cells connected to the bit line is 128, the residual load capacitance (i.e., the load capacitance above and beyond that associated with the sense amplifier and the column transmission circuitry) amounts to about 25%. However, the residual bit line load capacitance rises to 39% for 256 memory cells connected to the bit line, 54% for 512 connected memory cells, and 70% for 1024 connected memory cells. Of note, this trend continues as even more memory cells are connected to the bit line.
Thus, if the number of memory cells connected to a single bit line is increased in order to increase the density of a semiconductor memory device, the resulting load capacitance will increase, thereby decreasing the operating speed of the device.
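The trend of FIG. 1 follows from a simple model: the residual bit line capacitance grows roughly linearly with the number of attached cells, while the sense-amplifier and YPATH contributions stay fixed. The per-cell and fixed capacitance values below are assumptions, chosen only to roughly reproduce the percentages quoted from FIG. 1 (25%, 39%, 54%, 70%).

```python
C_PER_CELL = 1.0   # assumed bit-line capacitance added per cell (arbitrary units)
C_FIXED = 384.0    # assumed fixed SenseAmp + YPATH capacitance (same units)

def residual_fraction(n_cells):
    # Residual (cell-dependent) capacitance as a fraction of total load.
    c_residual = n_cells * C_PER_CELL
    return c_residual / (c_residual + C_FIXED)

for n in (128, 256, 512, 1024):
    print(f"{n:4d} cells: residual = {residual_fraction(n):.0%}")
```

With these assumed values the model yields roughly 25%, 40%, 57%, and 73%, matching the qualitative behavior of FIG. 1: the fixed sense-amplifier and column-path capacitance is progressively swamped by the bit line's own capacitance as cell count grows.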
FIG. 2 is a schematic diagram illustrating an exemplary, conventional SRAM device having bit lines suffering from a large load capacitance.
With reference to FIG. 2, the structure includes word lines WL0, WL1, . . . , and WLn-1, a column decoder YDEC 20, column pass gates YPASS, 22 and 24, a plurality of memory cells MC, bit line pairs BLm-1, BLm-1B, BLm and BLmB and a sense amplifier 26.
In operation, word lines WL0, WL1, . . . , and WLn-1 are selected by a row decoder (not shown). Column decoder 20 receives a column address YA, and outputs a column selection signal as a decoded signal. Column pass gates 22 and 24 receive the column selection signal, and electrically connect a bit line pair connected to a memory cell MC designated by the column address YA, with sense amplifier 26.
Bit line pairs BLm-1, BLm-1B, BLm and BLmB transmit data from the connected plurality of memory cells MC, or transmit data to the memory cells MC. The plurality of memory cells MC are disposed and connected at intersections of the bit line pairs BLm-1, BLm-1B, BLm and BLmB and word lines WL0, WL1, . . . , and WLn-1.
Sense amplifier 26 senses and amplifies a signal output from a bit line selected by a column selection signal.
In general, an SRAM includes a plurality of “memory mats”. Each memory mat may be divided into a plurality of sub memory mats, or “sub mats”. Further, each sub mat may be divided into a plurality of memory “blocks”. Each memory block typically includes a plurality of sense amplifiers that are divided and disposed in relation to an input/output (I/O) port. Each sense amplifier is shared by a number of bit line pairs corresponding to the number of column bits within each memory block.
In one example, the number of column bits within each memory block is assumed to be 32, and the number of I/O ports is assumed to be 9. Thus, each sense amplifier is shared by 32 bit line pairs and is allocated to one I/O port, and the number of column pass gates YPASS associated with the bit line pairs is also 32. (This number would change to 64, for example, if the number of column bits per memory block were 64.) Column pass gates 22 and 24 are allocated across a bit line pair to receive the column selection signal and electrically connect the corresponding bit line pair with a sense amplifier.
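The column selection just described can be sketched as follows: the column decoder drives a one-hot column selection signal that closes exactly one pair of pass gates among the 32 bit line pairs sharing a sense amplifier. This is an illustrative model only; the function and pair names are assumptions, not part of the device.

```python
N_COLUMNS = 32  # column bits per memory block, as assumed above

def decode_column(column_address):
    """Return the one-hot column selection signals Ya0..Ya31."""
    assert 0 <= column_address < N_COLUMNS
    return [int(i == column_address) for i in range(N_COLUMNS)]

def select_bit_line_pair(bit_line_pairs, column_address):
    """Connect the addressed pair to the shared sense amplifier."""
    y = decode_column(column_address)
    # Exactly one pass gate is enabled, so exactly one pair is selected.
    (selected,) = [pair for pair, sel in zip(bit_line_pairs, y) if sel]
    return selected

pairs = [(f"BL{i}", f"BL{i}B") for i in range(N_COLUMNS)]
print(select_bit_line_pair(pairs, 5))
```

The one-element unpacking enforces the one-hot property: if the decoder ever selected zero or two columns, the model would fail, mirroring the requirement that only one bit line pair drive the shared sense amplifier at a time.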
Within this exemplary context, efforts have been made to reduce the number of memory cells connected to each bit line within each memory block in order to avoid overly high load capacitances that adversely affect data transmission speed in a constituent memory device. One exemplary method in this regard is described in relation to FIG. 3.
FIG. 3 is a schematic diagram illustrating operation of an exemplary, conventional SRAM having reduced load capacitance per bit line.
With reference to FIG. 3, two bit line pairs BLm-1, BLm-1B, BLm, BLmB are shown. The two bit line pairs BLm-1, BLm-1B, BLm, BLmB receive a column selection signal output from a column decoder YDEC 30 within one memory block of an SRAM, and so are selectively connected to sense amplifier 36 through respective column pass gates YPASS 32 and 34.
Comparing the bit line structure of FIG. 3 with the former conventional structure shown in FIG. 2, the memory cells of FIG. 3 connected to one bit line pair are divided into two groups and are controlled separately.
In other words, the illustrated “divided bit line structure” effectively reduces the load capacitance per bit line pair by essentially reducing the number of memory cells connected to each bit line. The memory cells are divided into two groups, and each group is separately controlled by a control signal applied to a respective selection line SL1 or SL2.
This method of separately and individually controlling the divided bit lines will now be described in some additional detail, in the context of an example assuming that the accessed memory cell MC is connected to an upper bit line pair.
Taking bit line pair BLm-1, BLm-1B as an example, the bit line pair is switched and independently controlled by switching transistors NM31, NM32, NM33 and NM34. When the control signal applied to selection line SL1 is high and the control signal applied to selection line SL2 is low, node N31 is high and node N32 is low; thus switching transistors NM31 and NM32 are turned ON, and switching transistors NM33 and NM34 are turned OFF.
Relative to bit line pair BLm-1, BLm-1B, the portion of the bit line pair above switching transistors NM31 and NM32 is called the “upper bit line pair,” and the portion below switching transistors NM33 and NM34 is called the “lower bit line pair.”
The upper bit line pair is electrically connected to a global bit line pair GBLm-1, GBLm-1B, and the lower bit line pair is electrically disconnected from the global bit line pair GBLm-1, GBLm-1B. The global bit line pair GBLm-1, GBLm-1B is electrically connected to sense amplifier 36 through column pass gate 32. Sense amplifier 36 senses, amplifies and outputs data received from the global bit line pair GBLm-1, GBLm-1B.
Conversely, when the accessed memory cell MC is connected to the lower bit line pair, the control signal applied to selection line SL1 is low, and the control signal applied to selection line SL2 is high.
Thus, the lower bit line pair is connected to the global bit line pair GBLm-1, GBLm-1B, and the global bit line pair GBLm-1, GBLm-1B is electrically connected to sense amplifier 36 through column pass gate 32. Sense amplifier 36 senses, amplifies, and outputs data received from the global bit line pair GBLm-1, GBLm-1B.
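The complementary SL1/SL2 selection described above can be sketched as a simple multiplexer: one and only one of the two half-bit-line pairs is connected to the global bit line pair at a time. The signal names follow FIG. 3; the data values are illustrative assumptions.

```python
def global_bit_line(sl1, sl2, upper_pair, lower_pair):
    """Model of switching transistors NM31..NM34 for one divided bit line pair."""
    assert sl1 != sl2, "SL1 and SL2 are driven complementarily"
    # SL1 high -> NM31/NM32 ON -> upper pair drives GBLm-1, GBLm-1B;
    # SL2 high -> NM33/NM34 ON -> lower pair drives the global pair.
    return upper_pair if sl1 else lower_pair

upper = ("upper BL data", "upper BLB data")
lower = ("lower BL data", "lower BLB data")

print(global_bit_line(1, 0, upper, lower))  # accessing an upper-half cell
print(global_bit_line(0, 1, upper, lower))  # accessing a lower-half cell
```

The assertion captures the key constraint of the scheme: if both halves were connected simultaneously, the load-capacitance benefit of dividing the bit line would be lost.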
Thus, one conventional approach to addressing the problem of high bit line load capacitance results in the provision of a memory device, like the one illustrated in FIG. 3, including global bit line pairs GBLm-1, GBLm-1B, GBLm, GBLmB, switching transistors NM31, NM32, NM33, NM34, NM35, NM36, NM37 and NM38, and selection lines SL1, SL2.
This reduction in overall load capacitance is the result of reduced residual bit line capacitance. Yet, the portion of bit line load capacitance associated with column transmission circuitry (e.g., column pass gates 22 and 24) remains unchanged by the foregoing solution illustrated in FIG. 3.
FIG. 4 is a circuit diagram further illustrating in some additional detail an exemplary column pass gate YPASS, such as those used in conjunction with the circuits shown in FIGS. 2 and 3.
The typical column pass gate YPASS receives read/write information RCON and a column address YA, and selects the bit line pair connected to the accessed memory cell, thereby discriminating the data read path from the data write path. The column pass gate YPASS also receives a column selection signal Yai.
In a data read operation applied to the column indicated by the column selection signal Yai, the column selection signal Yai and the read/write information RCON go high. As a result, a bit line pair BL, BLB and a read line pair LRSDL, LRSDLB are connected electrically.
In a data write operation, only the column selection signal Yai goes high. As a result, the bit line pair BL, BLB and a write line pair LWSDL, LWSDLB are connected electrically.
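The read/write steering of the column pass gate described above can be summarized in a small truth-table sketch. The line-pair names follow FIG. 4; the function itself is an illustrative model, not part of the device.

```python
def ypass(yai, rcon):
    """Return the line pair that bit line pair BL/BLB is connected to, or None."""
    if not yai:
        return None                    # column not selected: gates remain open
    if rcon:
        return ("LRSDL", "LRSDLB")     # read: Yai and RCON both high
    return ("LWSDL", "LWSDLB")         # write: only Yai high

print("read :", ypass(1, 1))
print("write:", ypass(1, 0))
print("idle :", ypass(0, 0))
```

Note that RCON only matters once the column is selected; unselected columns are isolated from both the read and write line pairs regardless of RCON.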
FIG. 5 is a block diagram schematically illustrating exemplary sub mats in an SRAM and adapted to provide a conventional data read path. Each sub mat includes a plurality of memory blocks. A first sub mat SMAT1 includes a plurality of memory blocks BLK1˜BLK8, and a second sub mat SMAT2 includes a plurality of memory blocks BLK11˜BLK18.
Each of the plurality of memory blocks BLK1˜BLK8 and BLK11˜BLK18 includes a first sense amplifier BSA1 and a second sense amplifier BSA2. FIG. 5 illustrates only one first sense amplifier BSA1 and one second sense amplifier BSA2, but a plurality of first sense amplifiers BSA1 and a plurality of second sense amplifiers BSA2 are actually allocated and disposed per I/O port. Thus, within each memory block, the number of I/O ports is equal to the number of first sense amplifiers BSA1 and to the number of second sense amplifiers BSA2.
Each of the first sense amplifiers BSA1 senses and amplifies data represented on a bit line selected by an address, and each of the second sense amplifiers BSA2 senses and amplifies data output from each of the first sense amplifiers BSA1.
The second sense amplifiers BSA2 significantly reduce the amount of time required to output full-swing data at contemporary CMOS levels and/or to provide output data at a well stabilized level. Thus, to increase the speed of a data read operation and/or to output data at a well stabilized level, several groups of sense amplifiers are generally used in a conventional SRAM.
Still referring to FIG. 5, main data lines MDL0 and MDL1 transmit data output from the second sense amplifiers BSA2.
The data transmitted through the main data lines MDL0 and MDL1 is applied to a logical NAND gate NAND51, and NANDed before being output to a data output terminal through an output driver (not shown). The main data lines MDL0 and MDL1 are precharged to high during a read operation. Thus, when any one of the main data lines MDL0 and MDL1 goes low, the NAND gate NAND51 outputs a high, and as such, it may be regarded as performing a logical sum operation.
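Because main data lines MDL0 and MDL1 are precharged high, data is effectively signaled active-low, and the NAND gate behaves as a logical OR of the active-low data, as noted above. This behavior can be sketched as follows; the function name is an assumption following reference numeral NAND51.

```python
def nand51(mdl0, mdl1):
    # Two-input NAND of the main data line levels (1 = precharged high).
    return int(not (mdl0 and mdl1))

# With active-low data (0 = "data present"), the NAND output goes high
# whenever either precharged line is pulled low:
for mdl0, mdl1 in [(1, 1), (0, 1), (1, 0)]:
    print(f"MDL0={mdl0} MDL1={mdl1} -> NAND={nand51(mdl0, mdl1)}")
```

Only the all-precharged case (both lines high) produces a low output, which is exactly the truth table of OR applied to the active-low data values.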
FIG. 6 is a circuit diagram illustrating in some additional detail a data read path for one I/O port within two blocks BLK1 and BLK11 of FIG. 5.
As shown in FIG. 6, a first sense amplifier BSA1, 52 and a second sense amplifier BSA2, 54 are included within one memory block BLK1, and a first sense amplifier BSA1, 56 and a second sense amplifier BSA2, 58 are included in another memory block BLK11.
One bit line pair within the memory block BLK1 is selected by a column address, and data represented on the bit line pair is transmitted to a local section data line pair LSDL, LSDLB. First sense amplifier 52 within the memory block BLK1 is enabled by a sense amplifier enable signal BSA1_EN, and primarily senses and amplifies data represented on the local section data line pair LSDL, LSDLB.
Second sense amplifier 54 is enabled by a sense amplifier enable signal BSA2_EN, and secondarily senses and amplifies data output from the first sense amplifier 52. The data output by the second sense amplifier BSA2 is transmitted by the main data line MDL0.
The structure and operation of first sense amplifiers 52 and 56 and second sense amplifiers 54 and 58 are conventionally understood and will not be described in any further detail.
As shown in FIGS. 5 and 6, the number of main data lines applied as inputs to the NAND gate NAND51 is equal to the number of sub mats. The NAND gate NAND51 performs a logical NAND operation on the signals input from the plurality of main data lines MDL0 and MDL1.
That is, an exemplary SRAM having this data read path incurs significant signal delay from the logical NAND operation, thus lowering its operating speed. Moreover, first and second sense amplifiers are required for every memory block, thus increasing the chip size of the exemplary SRAM and its power consumption during a read operation.
FIG. 7 is a block diagram schematically illustrating one I/O port for an exemplary SRAM and adapted to provide a conventional data write path.
Referring to FIG. 7, when data is input to a write driver unit WDRV 76, the data is transmitted to a data input line pair DIL, DILB. The data of the data input line pair DIL, DILB is transmitted to a local data input line pair LDIL, LDILB. A column pass gate YPASS 74 receiving a column selection signal output from a column decoder YDEC 70 electrically connects a selected bit line pair BL, BLB with a local data input line pair LDIL, LDILB. Then, the data transmitted to the bit line pair BL, BLB is written to a memory cell selected by an address.
In the data write path, the write driver unit 76 outputs full-swing data at CMOS levels when the data is applied. The full-swing data at CMOS levels is applied to the data input line pair DIL, DILB and the local data input line pair LDIL, LDILB, resulting in higher power consumption and reduced operating speed for the write operation.
Thus, improvements to the bit line structure and to the associated data read and data write paths are essentially required to realize a semiconductor memory device truly capable of operating with reduced power consumption and increased operating speed at higher densities.