Semiconductor memories are extensively used in electronics, such as computing devices, mobile devices and other consumer devices. While some memories are used as discrete components, others are embedded with other sub-systems to help realize smaller form-factor mobile devices. For example, microprocessors, digital signal processors (DSPs) and application specific integrated circuits (ASICs) have embedded memory. This memory can include volatile memories, such as SRAM and DRAM, or non-volatile memories, such as Flash memory.
Regardless of the type of memory or how the memory is implemented in a device, low power consumption is an important requirement of the overall system. This requirement is especially important for mobile devices, since users prefer to maximize the time the device can be used before the battery must be recharged or replaced. While users can turn a device off to truly maximize power conservation, the time required to activate the device from the off state is unacceptably long, and, in the case of mobile phones, an off device will not receive calls or messages. Most mobile devices are therefore fully active for only a short duration of time, and spend the remaining time in a lower power consumption mode such as standby or deep power down. In such modes, the data stored in memory must be retained so that the device can “wake up” relatively quickly in response to received data or user intervention. Since most mobile devices spend a large proportion of their “on” time in such lower power consumption modes, power conservation should be maximized during these low power modes of operation.
Although non-volatile memories consume little power, their memory access operations are slower than those of SRAM and DRAM memories. Because DRAM arrays are much smaller than SRAM arrays of equivalent density, DRAM is preferred for its high storage capacity and smaller size.
In an embedded application, the memory system can be organized hierarchically. FIG. 1 is an example of a DRAM memory macro 10 organized as four identical blocks 12. There can be any number of blocks 12 in memory macro 10. Each block 12 is further divided into four subblocks 14, and each subblock 14 is sub-divided into four sub-arrays 16. Each sub-array 16 can include memory cell array peripheral circuits, also known as core area circuits, such as wordline drivers, bitline sense amplifiers, column select devices, intermediate sense circuits, and other control circuits for writing data to or reading data from the memory cells of each sub-array. Power supply voltages, such as VDD and VSS, can be routed to all four blocks 12 and their respective circuits. Additional internally regulated voltages derived from VDD and VSS can be routed to all the blocks 12. Those of skill in the art will understand that the internally regulated voltages can be equal to, less than, or greater than VDD or VSS.
Depending on the desired configuration, all n bits of data provided by memory macro 10 for any single memory access operation can come from one block 12. Alternatively, an equal fraction of the bits (n/4) can be provided by all four blocks 12 simultaneously. Within each block 12, data can be read from one or more sub-arrays 16 of one subblock 14 via data buses (DB) (not shown), which can be sensed by data bus sense amplifiers (DBSA) within the block I/O circuit 18 local to each block 12. A macro I/O and control circuit block 19 includes the input/output ports for memory macro 10, and can also contain data input and output circuitry, DRAM control circuitry and BIST circuitry. Those of skill in the art will understand that a variety of data access configurations of memory macro 10 can be implemented.
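The hierarchy and the two example data-access configurations described above can be summarized in a short sketch. The following Python model is purely illustrative: the constants and the function name are hypothetical and are not part of memory macro 10, and the model assumes the n bits divide evenly among the blocks.

```python
# Illustrative model of the FIG. 1 hierarchy (hypothetical names; the real
# macro is a hardware structure, not software).
BLOCKS_PER_MACRO = 4
SUBBLOCKS_PER_BLOCK = 4
SUBARRAYS_PER_SUBBLOCK = 4

def bits_per_block(n_bits, single_block):
    """Return how many of the n output bits each block 12 supplies.

    single_block=True:  all n bits come from one accessed block.
    single_block=False: each of the four blocks supplies n/4 bits.
    """
    if single_block:
        return {0: n_bits, 1: 0, 2: 0, 3: 0}
    share = n_bits // BLOCKS_PER_MACRO
    return {b: share for b in range(BLOCKS_PER_MACRO)}

# Example: a hypothetical 32-bit access under each configuration.
print(bits_per_block(32, single_block=True))   # one block supplies all bits
print(bits_per_block(32, single_block=False))  # each block supplies n/4 bits
```

Note that in the second configuration every block is active on every access, which has consequences for leakage, as discussed below.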
Because low power consumption is desired, the DRAM memory macro 10 should have very low current consumption in the standby and deep power down modes. Even in a low power 90 nm process, the leakage current of the transistors in the core area contributes a significant amount of current in the standby and deep power down modes. The current leakage problem in small geometry semiconductor circuits is well known in the semiconductor industry.
Current leakage is also a problem during an active operating mode of the memory 10. If the memory 10 of FIG. 1 is configured to provide the n bits of data from one block 12 during an active memory access, then the remaining three unaccessed blocks 12 can contribute a significant amount to the active current due to leakage current. In the alternate configuration, where n/4 bits are provided by all four blocks 12, there can be several unaccessed subblocks 14 that consume power due to leakage current.
FIG. 2 shows one possible configuration of the bitline sense amplifier and first stage read select circuits within sub-array 16 of FIG. 1. The datapath circuit shown forms part of the read datapath for reading data from the sub-array 16. Write datapath circuitry is not shown or included in this discussion, but those skilled in the art will understand that such circuits are required for writing data to the memory. The circuit shown enables data on complementary bitlines BL0 and BL0* to be sensed and amplified in sense amplifier 42 and transferred to complementary data buses DB and DB* via the read select circuit 20. BL0 and BL0* are connected to a CMOS cross-coupled bitline sense amplifier 42 that is well known in the art. VDD is provided to the p-channel devices of the bitline sense amplifier via p-channel transistor 44, controlled by signal sp*. Similarly, VSS is provided to the n-channel devices of the bitline sense amplifier via n-channel transistor 46, controlled by signal sn. BL0 and BL0* are also provided to the column read access circuit 20, which includes a pair of n-channel series pull-down transistors for each bitline: transistors 26 and 28, and transistors 30 and 32. Transistors 26 and 28 are serially connected between DB* and logic low supply voltage VSS, while transistors 30 and 32 are serially connected between DB and VSS. The gate terminals of transistors 26 and 30 receive first stage column select signal YA0, and the gate terminals of transistors 28 and 32 are connected to BL0 and BL0* respectively. This circuit is well known in the art, and has been found to be a fast circuit for placing read data on VDD precharged DB/DB* lines. Data buses DB and DB* can be bi-directional read/write data buses or uni-directional read data buses; for the purposes of the following description, DB and DB* are uni-directional read data buses. FIG. 2 also includes databus precharge circuit 22, consisting of a pair of p-channel transistors 34 and 36 connecting VDD to DB and DB* respectively in response to precharge control signal P_read.
Operation of the circuits shown in FIG. 2 is described with reference to the sequence diagram of FIG. 3. In FIG. 3, the transition arrows indicate the logic level of a signal in response to a triggering signal. While in standby, P_read is at the low level to keep DB and DB* precharged to the high logic level, as shown by transition arrow 50. It is noted that the high and low logic levels correspond to VDD and VSS for the present discussion. During a read operation, wordlines are driven (not shown) and a voltage differential is developed on the mid-level precharged bitlines BL0 and BL0* which are then driven to complementary levels by bitline sense amplifier 42. P_read is raised to the high logic level to disable p-channel transistors 34 and 36, and YA0 is raised to the high logic level. In response to the bitline voltage levels and YA0 at the high logic level, DB* drops to the low logic level at transition arrow 52 while DB remains at the precharged high logic level. After the read cycle is completed, P_read is dropped to the low logic level, and DB and DB* are precharged back to the high logic level as shown by transition arrow 54.
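The logic-level behavior just described can be abstracted into a small truth-table model. The following Python sketch is a hypothetical abstraction only (the function name is invented, and the real circuit's behavior is analog and timing-dependent): it captures the precharge devices 34 and 36 and the two series pull-down paths of read select circuit 20.

```python
# Illustrative logic-level model of the FIG. 2 read datapath (hypothetical
# abstraction, not a circuit simulation).
def read_databus(p_read, ya0, bl0, bl0_b):
    """Return (DB, DB*) logic levels.

    p_read = 0: precharge transistors 34/36 are on, holding DB and DB* high.
    p_read = 1: series pair 26/28 discharges DB* when YA0 and BL0 are both
    high; series pair 30/32 discharges DB when YA0 and BL0* are both high.
    An undriven bus is assumed to hold its precharged high level.
    """
    if p_read == 0:
        return 1, 1                      # both buses precharged to VDD
    db = 0 if (ya0 and bl0_b) else 1     # transistors 30/32 pull DB to VSS
    db_b = 0 if (ya0 and bl0) else 1     # transistors 26/28 pull DB* to VSS
    return db, db_b

# Read cycle of FIG. 3: the sense amplifier drives BL0 high and BL0* low;
# with P_read and YA0 high, DB* falls while DB remains high.
print(read_databus(1, 1, 1, 0))  # (1, 0)
```

This mirrors transition arrows 50, 52 and 54 of FIG. 3: precharge while P_read is low, a single databus discharged during the read, and both buses restored high when P_read returns low.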
One leakage path in FIG. 2 is from VDD to VSS through p-channel transistor 44, through bitline sense amplifier 42 and n-channel transistor 46. The leakage current from this path becomes more significant as deep sub-micron process geometries decrease (to 90 nm, 65 nm, and 45 nm for example), and subthreshold currents increase.
Techniques are known in the art for overcoming the leakage current problem through the bitline sense amplifier circuits. One solution is to overdrive the gates of transistors 44 and 46 in the off state in order to minimize their current leakage. However, this solution requires more complicated bitline sense amplifier control circuitry, and requires high voltage devices with thicker gate oxides and larger gate lengths. The additional process cost and area cost may not be acceptable. Another solution is to lower the internal supply voltage provided to the bitline sense amplifier 42 during a power down mode of operation. However, the disadvantage of supplying the bitline sense amplifier 42 with current from an on-chip regulator is that the sense current capability may be reduced, causing slower bitline operation even in normal operating modes.
A second current leakage path in FIG. 2 occurs between VDD and VSS through transistor 34 connected between VDD and DB and the series connected transistors 30 and 32 between DB and VSS. Similarly, there is a leakage path from VDD to VSS through transistor 36 and transistors 26 and 28. Depending on the architecture and process, this leakage path can also be significant. Experimental results from a 2 Mb embedded DRAM macro in a 90 nm process showed that the current leakage from VDD to VSS through the databus precharge devices and read y-select devices could contribute about 40 mA to the standby current. Since multiple 2 Mb DRAM macros can be implemented in a system, the total aggregate current leakage can become unacceptable.
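The aggregate effect of this second leakage path can be illustrated with simple arithmetic. The sketch below is hypothetical (the function name and the four-macro example are invented for illustration); it only scales the approximately 40 mA per-macro figure reported above.

```python
# Illustrative arithmetic only: aggregate standby leakage of the databus
# precharge / read y-select path, scaled from the ~40 mA per-macro figure
# reported for a 2 Mb embedded DRAM macro in a 90 nm process.
LEAKAGE_PER_MACRO_MA = 40  # approximate contribution per 2 Mb macro

def aggregate_leakage_ma(num_macros):
    """Total leakage contribution, in mA, for num_macros identical macros."""
    return num_macros * LEAKAGE_PER_MACRO_MA

# A hypothetical system with four 2 Mb macros:
print(aggregate_leakage_ma(4))  # 160 mA of standby current from this path
```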
One solution for reducing current leakage in the DB and DB* path is to connect the source terminals of transistors 28 and 32 to the drain terminal of transistor 46. However, this configuration results in a slower pull-down of the read databus, and requires that the current source for transistor 46 be large enough to both drive the bitline sense amplifier and pull down the read databus.
Thus it is desirable to develop a memory architecture and corresponding memory array core circuits which minimize leakage currents in standby and deep power-down modes, and also minimize the leakage current of portions of the memory that are not being used, such that their contribution to the active power consumption is low.