1. Field of the Invention
This invention relates to semiconductor integrated devices and, more particularly, to semiconductor memory devices providing increased memory speed, performance and robustness within a highly compact memory cell layout.
2. Description of the Related Art
The following descriptions and examples are not admitted to be prior art by virtue of their inclusion within this section.
Generally speaking, system-on-chip (SoC) technology refers to the ability to place multiple functional “subsystems” on a single semiconductor chip. The term “system-on-chip” may be used to describe many of today's complex ASICs, where many functions previously achieved by combining multiple chips on a board are now provided by a single chip. SoC technology provides the advantages of cutting development cycle time, while increasing product functionality, performance and quality. The various types of subsystems that may be integrated within the semiconductor chip include microprocessor and micro-controller cores, digital signal processors (DSPs), memory blocks, communications cores, sound and video cores, radio frequency (RF) cells, power management, and high-speed interfaces, among others. In this manner, system-on-chip technology can be used to provide customized products for a variety of applications, including low-power, wireless, networking, consumer and high-speed applications.
There are various types of semiconductor memory, including Read Only Memory (ROM) and Random Access Memory (RAM). ROM is typically used where instructions or data must not be modified, while RAM is used to store instructions or data that must be both read and modified. ROM is a form of non-volatile storage—i.e., the information stored in ROM persists even after power is removed from the memory. On the other hand, RAM storage is generally volatile, and must remain powered-up in order to preserve its contents.
A conventional semiconductor memory device stores information digitally, in the form of bits (i.e., binary digits). The memory is typically organized as a matrix of memory cells, each of which is capable of storing one bit. The cells of the memory matrix are accessed by wordlines and bitlines. Wordlines are usually associated with the rows of the memory matrix, and bitlines with the columns. Raising a wordline activates a given row; the bitlines are then used to read from, or write to, the corresponding cells in the currently active row. Memory cells are typically capable of assuming one of two voltage states (commonly described as “on” or “off”). Information is stored in the memory by setting each cell in the appropriate logic state. For example, to store a bit having a value of 1 in a particular cell, one would set the state of that cell to “on;” similarly, a value of 0 would be stored by setting the cell to the “off” state. (Obviously, the association of “on” with 1 and “off” with 0 is arbitrary, and could be reversed.)
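The row/column addressing scheme described above can be sketched as a simple behavioral model (purely illustrative; the class and method names are invented for this sketch and are not circuit-level):

```python
class MemoryMatrix:
    """Behavioral sketch of a wordline/bitline-addressed memory matrix.

    Wordlines select rows; bitlines then read or write individual cells
    (columns) within the currently active row.
    """

    def __init__(self, rows, cols):
        self.cells = [[0] * cols for _ in range(rows)]
        self.active_row = None  # index of the currently raised wordline

    def raise_wordline(self, row):
        # Activates every cell in the selected row.
        self.active_row = row

    def write_bit(self, col, value):
        # The bitline drives the addressed cell in the active row.
        self.cells[self.active_row][col] = 1 if value else 0

    def read_bit(self, col):
        # The bitline senses the state of the addressed cell.
        return self.cells[self.active_row][col]
```

In this idealized model, a read or write always targets the intersection of the one raised wordline and the selected bitline, mirroring the row-then-column access order described above.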
The two major types of semiconductor RAM, Static Random Access Memory (SRAM) and Dynamic Random Access Memory (DRAM), differ in the manner by which their cells represent the state of a bit. In an SRAM, each memory cell includes transistor-based circuitry that implements a bi-stable latch. A bi-stable latch relies on transistor gain and positive (i.e. reinforcing) feedback to guarantee that it can only assume one of two states—“on” or “off.” The latch is stable in either state (hence, the term “bi-stable”). It can be induced to change from one state to the other only through the application of an external stimulus; left undisturbed, it will remain in its original state indefinitely. This is just the sort of operation required for a memory circuit, since once a bit value has been written to the memory cell, it will be retained until it is deliberately changed.
In contrast to the SRAM, the memory cells of a DRAM employ a capacitor to store the “on”/“off” voltage state representing the bit. A transistor-based buffer drives the capacitor. The buffer quickly charges or discharges the capacitor to change the state of the memory cell, and is then disconnected. Ideally, the capacitor then holds the charge placed on it by the buffer and retains the stored voltage level.
DRAMs have at least two drawbacks compared to SRAMs. The first of these is that leakage currents within the semiconductor memory are unavoidable, and act to limit the length of time the memory cell capacitors can hold their charge. Consequently, DRAMs typically require a periodic refresh cycle to restore sagging capacitor voltage levels. Otherwise, the capacitive memory cells would not maintain their contents. Secondly, changing the state of a DRAM memory cell requires charging or discharging the cell capacitor. The time required to do this depends on the amount of current the transistor-based buffer can source or sink, but generally cannot be done as quickly as a bi-stable latch can change state. Therefore, DRAMs are typically slower than SRAMs. However, DRAMs tend to offset these disadvantages by offering higher memory cell densities, since the capacitive memory cells are intrinsically smaller than the transistor-based cells of an SRAM.
As SoC technology becomes more sophisticated, greater density, speed and performance are demanded from memory devices embedded thereon. For this reason, SRAM devices—rather than DRAM devices—are typically used in applications where speed is of primary importance, such as in communication and networking SoC applications (e.g., routers, switches and other traffic control applications). The SRAM devices most commonly used for communication and networking SoC applications are single-port devices (FIG. 1) and dual-port devices (FIG. 2), and in some cases, two-port devices (not shown).
FIG. 1 is a circuit diagram of a typical single-port SRAM memory cell 100. In general, memory cell 100 includes six transistors and uses one bi-directional port for accessing the storage element. As shown in FIG. 1, memory cell 100 utilizes a minimum of five connections: one wordline (WL) for accessing the port, two bitlines (BL/BLB) for transferring the data and data complement to and from the storage element, and one power supply line (VDD) and one ground supply line (VSS) for powering the storage element and holding the data. The storage element, or bi-stable latch, of memory cell 100 may be implemented with cross-coupled P-channel load transistors (T1P and T2P) and N-channel latch transistors (T1N and T2N). In an alternative embodiment, however, resistive load devices may be used in place of the P-channel load transistors, as is known in the art. A pair of N-channel access transistors (T3 and T4) provides access to the storage nodes (SN/SNB) of the bi-stable latch.
In some cases, memory cell 100 may be accessed by applying a positive voltage to the wordline (often referred to as “raising the wordline”), which activates access transistors T3 and T4. This may enable the pair of bitlines (BL/BLB) to sense the contents of the memory cell based on the voltages present at the storage nodes. For example, if storage node SN is at a high voltage (e.g., a power supply voltage, VDD) and node SNB is at a low voltage (e.g., a ground potential, VSS) when the wordline is raised, latch transistor T2N and access transistor T4 are activated to pull the bitline complement (BLB) down toward the ground potential. At the same time, the bitline (BL) is pulled up towards the power supply voltage by activation of latch transistor T1P and access transistor T3. In this manner, the state of the memory cell (either a “1” or “0”) can be determined (or “read”) by sensing the potential difference between bitlines BL and BLB. Conversely, writing a “1” or “0” into the memory cell can be accomplished by forcing the bitline or bitline complement to either VDD or VSS and then raising the wordline. The potentials placed on the pair of bitlines will be transferred to the respective storage nodes, thereby forcing the cell into either a “1” or “0” state.
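The read and write operations described above can be summarized as a behavioral sketch of the six-transistor cell (an idealized digital model, not a circuit simulation; the class name and the assumption that SNB is always the exact complement of SN are simplifications made for this sketch):

```python
class SramCell6T:
    """Behavioral sketch of the single-port 6T SRAM cell of FIG. 1."""

    def __init__(self):
        self.sn = 0  # storage node SN; SNB is modeled as its complement

    @property
    def snb(self):
        return 1 - self.sn

    def read(self):
        # Raising WL connects SN/SNB to BL/BLB through T3/T4; the sense
        # circuitry resolves the differential (BL - BLB) into a bit.
        bl, blb = self.sn, self.snb
        return 1 if bl > blb else 0

    def write(self, value):
        # The bitlines are forced to VDD/VSS before WL is raised,
        # overpowering the latch and setting the storage nodes.
        self.sn = 1 if value else 0
```

The key point the model captures is that a read senses the differential between the two bitlines, while a write drives that differential onto the storage nodes.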
Some SoC applications benefit from the use of dual-port or two-port memory devices, which allow two independent devices (e.g., a processor and micro-controller, or two different processors) to have simultaneous read and/or write access to memory cells within the same row or column. Dual-port and two-port memory devices are essentially identical in form, and as such, can both be described with reference to FIG. 2. However, dual-port and two-port memory devices differ in function: whereas both ports of a dual-port cell are used for read and write operations, one port of a two-port cell is used strictly for write operations, while the other port is used strictly for read operations.
FIG. 2 is a circuit diagram of a typical dual-port SRAM memory cell 200, which utilizes a pair of bi-directional ports—referred to as “port A” and “port B”—for accessing the storage element. As shown in FIG. 2, memory cell 200 utilizes eight connections: a wordline for each port (WLA and WLB), two pairs of bitlines (BLA/BLBA and BLB/BLBB) for reading from and writing to the nodes of the storage element, as well as a power supply line (VDD) and a ground supply line (VSS). In addition to the six transistors described above for single-port memory cell 100, a second pair of N-channel access transistors (T5 and T6) is included within dual-port memory cell 200 for accessing storage nodes SN and SNB via the additional port.
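The functional distinction between the dual-port cell of FIG. 2 and its two-port variant can be sketched behaviorally (an idealized model in which, per the two-port convention described above, port A is write-only and port B is read-only; all names are illustrative):

```python
class TwoPortCell:
    """Behavioral sketch of a two-port SRAM cell.

    A dual-port cell would permit reads and writes on both ports; in the
    two-port variant modeled here, the ports are specialized: port A
    writes, port B reads, and both may be active simultaneously.
    """

    def __init__(self):
        self.sn = 0  # storage node SN (SNB is its complement)

    def port_a_write(self, value):
        # WLA raised; BLA/BLBA forced to drive the storage nodes
        # through the first pair of access transistors (T3/T4).
        self.sn = 1 if value else 0

    def port_b_read(self):
        # WLB raised; BLB/BLBB sense the storage nodes through the
        # second pair of access transistors (T5/T6).
        return self.sn
```

Because each port has its own wordline and bitline pair, two independent devices can access the cell concurrently, which is the property that makes these cells attractive for the SoC applications noted above.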
Like most semiconductor devices, SRAM devices are typically fabricated en masse on semiconductor wafers through numerous processing steps. For example, an SRAM device may be fabricated as a metal-oxide-semiconductor (MOS) integrated circuit, in which a gate dielectric, typically formed from silicon dioxide (or “oxide”), is formed on a semiconductor substrate that is doped with either n-type or p-type impurities. Conductive regions and layers of the device may also be isolated from one another by an interlevel dielectric. For each MOS field effect transistor (MOSFET) within the SRAM device, a gate conductor is formed over the gate dielectric, and dopant impurities are introduced into the substrate to form “source” and “drain” regions. Frequently, the integrated circuit will employ a conductive layer to provide a local interconnect function between the transistors and other components of the device, such as overlying bitlines, wordlines, power and ground supply lines.
A pervasive trend in modern integrated circuit manufacture is to produce transistors that are as fast as possible, and thus, have feature sizes as small as possible. Many modern day processes employ features, such as gate conductors and interconnects, which have less than 1.0 μm critical dimension. As feature sizes decrease, sizes of the resulting transistor and interconnects between transistors decrease. Fabrication of smaller transistors may allow more transistors to be placed on a single monolithic substrate, thereby allowing relatively large circuit systems to be incorporated onto a single, relatively small semiconductor chip.
As transistor feature sizes continue to decrease with advancements in manufacturing processes, greater amounts of memory may be incorporated onto the chip without increasing the chip area. This may be especially advantageous in many SoC applications, where the demand for on-chip memory is expected to increase from about 50% to about 90% of the total chip area. In an effort to effectively utilize chip area, many SoC designs divide the memory device into numerous memory blocks, which are then embedded at various locations within the chip, rather than concentrated in one large memory unit. Unfortunately, many of these SoC designs suffer from data corruption, which may be caused by stray capacitances from chip-level signals routed over the memory blocks. Though strict routing restrictions may be imposed to avoid data corruption, such restrictions often lead to chip-level routing congestion and undesirable increases in overall chip area. Therefore, a need exists for an improved memory cell architecture, which significantly decreases memory device area and chip-level routing congestion, while maintaining performance and speed specifications for next-generation SoC applications.