1. Field of the Invention
The present invention relates to solid-state memories. In particular, the present invention relates to a three-dimensional (3-D) arrangement of memory cells forming an ultra-low-cost solid-state memory.
2. Description of the Related Art
FIG. 1 shows a table setting forth estimated scaling limits, estimated performance characteristics and estimated costs for current and potential solid-state memory technologies, as projected for the year 2020. Revenue estimates for 2002 are given or are indicated as DEV for technologies that are still under development and RES for technologies in the research stage. Important factors influencing the cost per bit for the solid-state memories shown in FIG. 1 include the scalability to the smallest dimensions, the number of bits per cell, and the cost of three-dimensional (3-D) integration.
The scaling limits indicated for each solid-state technology are speculative and are based primarily on physical limits rather than current technical challenges. The cost of processing a unit area of silicon has remained fairly constant over the years, and has historically been about ten times higher than the cost per unit area for low-cost 3.5″ Hard Disk Drives (HDDs). It has been estimated that use of 300 mm wafers will lower the cost per unit area by about 30%. Nevertheless, current desktop HDDs are about 100 times cheaper per bit than DRAM and FLASH memories because, in addition to this cost-per-area advantage, HDDs have an areal density that is about ten times greater than DRAM or FLASH memories. For the memory technologies identified in FIG. 1 that are capable of low-cost 3-D integration, it is assumed that layers are added until the cost per unit area increases by 60%, thereby providing a good trade-off between lower cost and manufacturability.
Four technologies may eventually reach a cost that is comparable to that of HDDs through either multi-bit storage or 3-D integration, which are two characteristics that HDDs cannot practically possess. Two of the four technologies, PROBE memories and MATRIX memories, are likely to have performance characteristics that are inferior to HDDs. The other two technologies, Ovonic Universal Memory (OUM) and zero-transistor ferroelectric memory (0T-FeRAM), are likely to have superior performance to HDDs and are potential replacement technologies for HDDs. Even if a high-performance memory is twice as expensive as HDDs, it may still be widely preferable because large amounts of DRAM (or other memory) would not be required for buffering the processor.
The scaling limits and associated cost estimates for the various memory technologies shown in FIG. 1 are described below:
SRAM
A Static Random Access Memory (SRAM) cell is formed by six MOSFETs, so its scaling challenges are the same as those for transistors and wires. The most scalable MOSFET design is generally believed to be the double-gate transistor. See, for example, J. Wang et al., “Does Source-to-Drain Tunneling Limit the Ultimate Scaling of MOSFETs?” IEDM Tech. Digest (IEEE), p. 707 (2002). Because the gates must be insulated from the channel and the insulation must be thicker than about 2 nm to prevent excessive gate tunneling current, the gates must be separated by at least 4 nm plus the thickness of the channel. Roughly speaking, the channel length must be at least as long as the gate-to-gate distance for the transistor to properly turn off, even when a high-k dielectric insulation is used. Consequently, the smallest workable transistor is on the order of 5 to 6 nm in length.
Today, gate lengths are about 65 nm using lithography that is capable of 130 nm half-pitch between wires, so the smallest transistors would be appropriate for the 11 nm node in about 2020. See, for example, http://public.itrs.net. Extremely advanced lithography will be required for the 11 nm half-pitch node. The minimum half-pitch for Extreme-UV (EUV) lithography at a wavelength of 11 or 13 nm is given by F=k1λ/NA, in which k1 is a constant having a minimum value of about 0.25 using phase shift masks, λ is the wavelength and NA is the numerical aperture having a maximum value of about 0.55 for the reflective optics that are used for EUV lithography. See, for example, U.S. Pat. No. 5,815,310 to D. M. Williamson entitled “High Numerical Aperture Ring Field Optical Reduction System.” Although these particular parameters indicate that the lithography limit is about 5 nm half-pitch, it is unlikely this limit will be reached.
If the more conservative parameter values are considered, i.e., k1=NA=0.4, then the limit is at the 11 nm node. If transistor gate lengths must be somewhat longer than 6 nm, memory density will not be very adversely affected because the cell size is determined more by the wire pitch than by gate length.
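The half-pitch limits quoted above follow directly from F=k1λ/NA; a minimal sketch in Python, using the parameter values given in the text and assuming the 11 nm EUV wavelength:

```python
# Minimum half-pitch for EUV lithography: F = k1 * wavelength / NA.
def half_pitch(k1, wavelength_nm, na):
    """Return the minimum printable half-pitch in nanometers."""
    return k1 * wavelength_nm / na

# Aggressive limits: k1 = 0.25 (phase-shift masks), NA = 0.55 (reflective optics).
aggressive = half_pitch(0.25, 11, 0.55)    # about 5 nm
# Conservative limits: k1 = NA = 0.4.
conservative = half_pitch(0.4, 11, 0.4)    # 11 nm
print(aggressive, conservative)
```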
The minimum cell size for SRAM is large, at about 50 F²; consequently, the maximum density for F=11 nm is about 0.1 Tb/in². It is expected that SRAM will continue to be used in the future for applications in which speed is most important because SRAM is the fastest performing memory type for both reading and writing.
DRAM
A Dynamic Random Access Memory (DRAM) cell is formed by a MOSFET and a capacitor. The voltage stored on the capacitor must be refreshed about every 0.1 s due to leakage. DRAM memory has very serious scaling challenges. See, for example, J. A. Mandelman et al., “Challenges and Future Directions for the Scaling of Dynamic Random-Access Memory (DRAM),” IBM Journal of Research and Development, vol. 46, p. 187 (2002). For example, one of the most serious scaling obstacles for DRAM memory results from the adverse effects of radiation in which a single alpha particle can create about 1 million minority carriers that sometimes end up on the capacitor. For immunity from the effects of radiation, the capacitor must hold more than 1 million electrons, corresponding to a capacitance of about 30 fF. See, for example, A. F. Tasch et al., “Memory Cell and Technology Issues for 64 and 256-Mbit One-Transistor Cell MOS DRAMs,” Proceedings of the IEEE, vol. 77, p. 374 (1989).
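The relationship between the stored charge and the capacitance is N = CV/q; in the sketch below, the roughly 5 V storage voltage is an assumption chosen to illustrate how a 30 fF capacitor corresponds to on the order of a million electrons:

```python
# Electron count on a DRAM storage capacitor: N = C * V / q.
# The ~5 V storage voltage is an assumption (not stated in the text).
q = 1.602e-19      # electron charge, coulombs
C = 30e-15         # ~30 fF storage capacitance
V = 5.0            # assumed storage voltage, volts
N = C * V / q      # on the order of 1e6 electrons
print(N)
```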
In DRAM, reading the state of the capacitor is destructive, so the data must be rewritten afterward. With conventional architecture, the state of the capacitor is sensed by discharging the capacitor to a bit line having a capacitance that is much greater than 30 fF. Further reductions in storage capacitance would lower the sense voltage to levels that are not easily detectable. Because the capacitance cannot be readily scaled, the capacitor has presently taken the form of a cylinder extending deep into the silicon wafer and having an aspect ratio of about 50 to 1. An aspect ratio of this magnitude does not appear to be capable of being increased much further and soon capacitors will need to flare out under the silicon surface taking the shape of, for example, a bottle. Also, high-k dielectrics, such as barium strontium titanate (BST), will be needed for improving performance of the capacitor. Unfortunately, high-k dielectrics have a high leakage and, therefore, need a thickness that is thicker than that of the dielectric materials that are used today. Accordingly, the thickness of high-k dielectrics can add considerably to the diameter of nanometer-scale capacitors. With such scaling obstacles, it seems unlikely that DRAM will scale to be smaller than about 30 nm.
HDDs
Historically, Hard Disk Drives (HDDs) have had about ten times greater data density than DRAM or FLASH memories because there is little or no space between bits and data tracks. Additionally, bit density along the track is determined primarily by field gradient and head fly-height rather than by a minimum lithographic dimension. Only track density is determined by lithography. The areal density advantage of HDDs, however, is likely to decrease due to the superparamagnetic limit in which scaling of magnetic grain size in the disk is no longer possible because thermal energy kBT begins to compete with the magnetic anisotropy energy KuV. For written data to be thermally stable for a period of several years (at about 330 K), the minimum size of a magnetic grain is limited to approximately 8 nm.
Although materials exist having a minimum stable size of approximately 3 nm, the coercivity of these materials is higher than the maximum attainable field that can be produced by a write head. About 10-20 grains will be needed per bit to prevent excessive error correction from reducing the data density because the grains are randomly oriented. See, for example, R. Wood, “Recording Technologies for Terabit per Square Inch Systems,” IEEE Transactions on Magnetics, vol. 38, p. 1711, 2002, and M. Mallary et al., “One Terabit per Square Inch Perpendicular Recording Conceptual Design,” IEEE Transactions on Magnetics, vol. 38, p. 1719, 2002.
Although it is generally accepted that the areal density limit for conventional recording is about 1 Tb/in², it may be possible to use a revolutionary technology, such as thermally-assisted recording in which the disk is heated to lower the coercivity of the media for writing. Nevertheless, there is a limit when thermal energy kBT begins to compete with the Zeeman energy 2HAMsV, in which HA is the applied field, so that the grains are not properly oriented during writing. This effect limits the grain size to about 4 nm, which is a factor of two smaller than the grain size used for conventional recording. Unfortunately, there is no known practical way to make a nanometer-scale heat spot on the disk.
Patterned media has also been proposed as a way to surpass 1 Tb/in². An e-beam master is used to stamp a pattern into the disk to form magnetic islands so that there can be only one grain per bit. Unfortunately, e-beam lithography resolution is limited by secondary electrons exposing the resist, thereby making it currently impossible to surpass 1 Tb/in². See, for example, S. Yasin et al., “Comparison of MIBK/IPA and Water/IPA as PMMA Developers for Electron Beam Nanolithography,” Microelectronic Engineering, vol. 61-62, p. 745, 2002. FIG. 1 indicates the density limit for HDDs to be 1 Tb/in², which may be reached as early as the year 2010.
FLASH
FLASH memory technology uses a single floating-gate transistor per cell. Typically, FLASH memory is used when an HDD is too bulky. FLASH memory has a fast read time, a relatively slow write time, a low data rate and low endurance. The cost of FLASH memories, however, is dropping rapidly, and FLASH is expected to be the fastest-growing memory type over the next few years, especially for NAND and AND-type FLASH memory architectures. For small capacities, FLASH memory is currently cheaper than HDDs because HDDs cannot cost much less than $50 based on fixed costs. Today, FLASH memory prices are cut in half every year due to aggressive scaling and the recent introduction of two-bits-per-cell technology. Four-bits-per-cell technology is expected to be available within a few years.
Although multi-bit storage techniques reduce estimated cost dramatically, multi-bit storage typically leads to lower performance because the read/write process is more complicated. The capability of FLASH memory to store multiple-bits per cell is based on the ability of the floating gate to store a very large number of electrons, thereby varying transistor conductance over many orders of magnitude. Therefore, FLASH memory has very fine granularity and low noise with today's technology.
FLASH memory, however, has very serious scaling challenges because the dielectric around the floating gate must be at least 8 nm thick to retain charge for ten years. See, for example, A. Fazio et al., “ETOX Flash Memory Technology: Scaling and Integration Challenges,” Intel Technology Journal, vol. 6, p. 23, 2002. This is four times thicker than the gate dielectric used in SRAM. Also, the voltage used for programming FLASH memories must be greater than about 8 volts, making it difficult to scale the peripheral transistors that are used to supply the programming voltage.
NOR FLASH memory is not believed to be scalable past the 65 nm node due to problems with drain-induced barrier lowering during programming at this length scale. See, A. Fazio et al., supra. Similarly, NAND FLASH memory is projected to have very serious scaling challenges below 40 nm due to interference between adjacent gates, particularly for multi-bit storage. See, for example, J.-D. Lee et al., “Effects of Floating-Gate Interference on NAND Flash Memory Cell Operation,” IEEE Electron Device Letters, vol. 23, p. 264, 2002.
Scaling projections for NAND FLASH memory, which are shown in FIG. 1, are based on the assumption that further improvements will scale NAND or NROM FLASH memory to about 30 nm half-pitch using four-bits-per-cell technology. Below this size, the small number of electrons per bit, the size of the high voltage circuits, and interference between charge storage regions will likely cause obstacles too significant for further scaling.
PROBE
Probe memory technology primarily refers to the “Millipede” concept for data storage being pursued by IBM in which a 2-D array of silicon cantilevers having very sharp silicon tips is scanned over a thin polymer film on a silicon substrate and heated for poking holes in the polymer. See, for example, P. Vettiger et al., “The Millipede—Nanotechnology Entering Data Storage,” IEEE Transactions on Nanotechnology, vol. 1, p. 39, 2002. Bits are detected by sensing the cooling of the cantilever when the tips dip down into the holes. Access times are about as long as for an HDD because the entire chip must be moved relative to the tip array to reach the desired memory address. Data rates are quite low compared to HDDs. That is, it will take a row of 400 cantilevers in a 160,000 cantilever array operating at about 100 kHz each to achieve a data rate of 4 MB/s. If this data rate can be achieved, PROBE memory would be competitive with FLASH and the 1″ Microdrive.
Power dissipation, however, is very high for both reading and writing because micron-scale heaters are used at temperatures of up to 400°C, dissipating about 5 mW each. Consequently, a 4 MB/s data rate would require 2 W of dissipation, thereby making PROBE storage two times less energy efficient per bit than the Microdrive and at least 20 times less efficient than FLASH memory. PROBE storage is inherently 2-D in nature and is not likely to be capable of multi-bit storage due to noise and other issues, although in theory there could be three layers of polymer with different glass transition temperatures to vary the depth with applied temperature and store 2 bits per indent.
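The data-rate and power figures above can be reproduced with simple arithmetic; the sketch below ignores coding and servo overhead (an assumption), which is presumably why the raw byte rate works out slightly above the quoted 4 MB/s:

```python
# Throughput and power for the Millipede-style array described above
# (400 active cantilevers out of a 160,000-cantilever array).
active_cantilevers = 400
bit_rate_per_cantilever_hz = 100e3   # ~100 kHz per tip
power_per_heater_w = 5e-3            # ~5 mW per micron-scale heater

raw_rate_mb_s = active_cantilevers * bit_rate_per_cantilever_hz / 8 / 1e6
total_power_w = active_cantilevers * power_per_heater_w

print(raw_rate_mb_s)   # ~5 MB/s raw, before overhead
print(total_power_w)   # 2.0 W, matching the figure in the text
```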
So far, the estimated cost per unit area is uncertain, but is likely to be at least as expensive as other solid-state memories because two silicon wafers are used in a precise sandwich arrangement and a substantial amount of peripheral control circuitry is needed. Alignment and thermal drift are a major problem and it is likely that a number of thermal sensors and compensation heaters will be needed to keep top and bottom wafers isothermal and to within one degree of each other. Tip wear and polymer durability are other major issues.
PROBE storage, however, has a major advantage in that bit size is determined by tip sharpness rather than by lithography. Also, because the polymer is amorphous, grain size limitations do not occur. In that regard, IBM has demonstrated an areal density of 1 Tb/in² using silicon tips. Improvements in tip technology might make it possible to improve the density significantly. Local oxidation storage at >1 Tb/in² has been demonstrated with a nanotube tip. See, for example, E. B. Cooper et al., “Terabit-Per-Square-Inch Data Storage with the Atomic Force Microscope,” Applied Physics Letters, vol. 75, p. 3566, 1999. If a manufacturable method of forming ultra-sharp, durable tips can be developed, perhaps 10 Tb/in² is possible. See, for example, E. Yenilmez et al., “Wafer Scale Production of Carbon Nanotube Scanning Probe Tips for Atomic Force Microscopy,” Applied Physics Letters, vol. 80, p. 2225, 2002.
OUM
Another emerging memory technology is known as Ovonic Universal Memory (OUM). See, for example, M. Gill et al., “Ovonic Unified Memory—a High-Performance Nonvolatile Memory Technology for Stand-Alone Memory and Embedded Applications,” ISSCC Tech. Digest (IEEE), p. 202, 2002. OUM uses one programmable resistor and one diode (or transistor) per cell. The high and low resistance states of a phase-change resistor (amorphous versus crystalline) are used for storing bits. OUM writing is accomplished by passing high current through the resistor to bring the material to the crystallization temperature or melting temperature (about 400 to 600°C). Rapid cooling of the melted material results in the amorphous (high resistance) phase. Writing the crystalline phase requires a longer time for nucleation and growth to occur (about 50 ns) and results in about 100 times lower resistance than in the amorphous phase.
Intermediate values of resistance can be set by controlling the current (and, therefore, temperature) during the programming pulse, thereby making multi-bit storage possible with OUM, but likely to be more difficult than for FLASH memory because the phase-change resistors cannot be accessed directly like the transistors in a FLASH memory. Direct access is not possible when a diode is used to prevent multiple current paths through the cells. A series diode effectively reduces the change in resistance from a factor of 100 to only about a factor of two. FIG. 1 indicates that two-bits-per-cell technology will be possible with OUM.
OUM is scalable because the resistance is determined by the position of the amorphous-crystalline boundary and has atomic-scale granularity. Although the phase-change material must be heated to very high temperature, the small programming volume results in reasonable power dissipation. OUM has a scaling problem in that power per unit area and current density scale inversely with size at constant peak temperature because the temperature gradient scales inversely with size. It is expected that current density will need to be in excess of 10⁷ A/cm² to heat up a volume that is 10 nm across to 600°C, even with excellent thermal isolation.
Nanoscale copper wires are known to have an electromigration time to failure of a few years at this current density and will quickly be destroyed at 10⁸ A/cm². See, for example, G. Steinlesberger et al., “Copper Damascene Interconnects for the 65 nm Technology Node: A First Look at the Reliability Properties,” IEEE Interconnect Technology Conference Proceedings, p. 265, 2002. Problems of electromigration can probably be avoided by using interconnects having a tall aspect ratio, although local electromigration near the devices could still be a significant problem.
Another issue that may be associated with OUM is the need for bulky transistors for driving large current densities, even though a short-channel length will help alleviate this potential problem. The need for large current density and a diode at each cell for preventing multiple current paths when accessing a cell will make 3-D integration of OUM quite difficult. Polycrystalline silicon diodes fail quickly at current densities of about 10⁶ A/cm². See, for example, O.-H. Kim et al., “Effects of High-Current Pulses on Polycrystalline Silicon Diode with N-Type Region Heavily Doped with both Boron and Phosphorus,” Journal of Applied Physics, vol. 53, p. 5359, 1982. In particular, polycrystalline silicon diodes are only reliable below current densities of about 10⁵ A/cm². See, for example, U.S. Pat. No. 6,429,449 to F. Gonzalez et al., entitled “Three-Dimensional Container Diode for Use with Multi-State Material in a Non-Volatile Memory Cell”.
A diode surface area 100 times larger than the area of the resistor would be required if polycrystalline silicon were used. Additionally, a large number of processing steps would be required to make a tall cylindrically-shaped diode. See, for example, U.S. Pat. No. 6,429,449 to F. Gonzalez et al. Very tall diodes would mean very high aspect ratios for the diodes and for the vias between layers. Even if very large grain size is achieved with a planar diode, a single grain boundary or intra-grain defect can cause a device to fail given the current density needed to write OUM memory.
Wafer bonding techniques used to make Silicon-On-Insulator (SOI) can be used to form diodes in multiple layers if single crystal silicon must be used. See, for example, K. W. Guarini et al., “Electrical Integrity of State-of-the-Art 0.13 μm SOI CMOS Devices and Circuits Transferred for Three-Dimensional (3D) Integrated Circuit (IC) Fabrication,” IEDM Tech. Digest (IEEE), p. 943, 2002. To keep costs down, it is advantageous to bond a very thin layer of silicon while reusing the host wafer. One process that appears suitable for making 3-D ICs with single crystal silicon is based on the ELTRAN method that has been developed by Canon. See, for example, K. Sakaguchi et al., “Current Progress in Epitaxial Layer Transfer (ELTRAN),” IEICE Trans. Electron., vol. E80C, p. 378, 1997. According to the ELTRAN method, a host wafer is etched to form a porous layer having very small holes at the surface and large cavities much further down. Epitaxial silicon then bridges the holes to form a new, very high quality surface layer that may undergo the high temperature (>600°C) processing that is needed to form diodes or transistors.
Subsequent steps can be carried out at lower temperature (<600°C) to prevent damage to the 3-D chip. The epitaxial layer is bonded to the 3-D chip and cleaved along the weak porous layer. Alternatively, the epitaxial layer is bonded to a transparent transfer wafer, cleaved, and then transferred to the chip. Etching and chemical-mechanical polishing (CMP) is used for resurfacing the two cleaved planes and the host wafer is reused. Low temperature processing, such as making phase-change resistors, can be performed on the 3-D chip before the next silicon layer is added. The advantage of OUM memory over other similar schemes based on a field-programmable resistor is that current passes in only one direction through the resistor so a diode can be used instead of a transistor for access, thereby reducing the size of the cell and the number of processing steps for each silicon layer. Even though the cost of single crystal silicon is high, 3-D integration should reduce cost more for OUM than for technologies that require a single crystal MOSFET in each cell.
Roughly estimated costs associated with OUM include about $5000 for processing a 300 mm wafer into chips, yielding up to 1000 dies of 70 mm², each costing about $5. EUV lithography is expected to be expensive at $40 per mask step. See, for example, http://www.sematech.org/public/resources/litho/coo/index.htm. Assuming five masks per layer and three layers, $600 is added to the estimated cost of the wafer. Today SOI wafers are very expensive at over $1000 each with the cost projected to drop to $700 over the next few years. If the cost can continue to drop to about $600, it is projected that three additional layers of silicon will cost about $1800 per 3-D wafer. If another $600 is budgeted for additional processing steps, costs for the masks and costs for testing, the total cost increases 60 percent, but the memory density increases by a factor of four, assuming the bottom layer also has memory cells. According to FIG. 1, OUM may eventually reach an estimated cost that is close to that estimated for HDDs when (1) less expensive SOI techniques (by today's standards) can be used for 3-D integration, (2) multiple bits can be stored per cell, and (3) lithography can be extended down to 10 nm.
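The cost build-up above can be summarized arithmetically; a minimal sketch using only the figures quoted in the text:

```python
# Cost build-up for 3-D OUM: a $5000 base wafer plus lithography,
# bonded-silicon, and miscellaneous costs for three added layers.
base_wafer = 5000                      # processed 300 mm wafer, $
litho = 5 * 3 * 40                     # 5 masks/layer x 3 layers x $40/mask step
soi_layers = 3 * 600                   # 3 bonded silicon layers at ~$600 each
other = 600                            # extra processing, masks, testing

total = base_wafer + litho + soi_layers + other   # $8000
cost_increase = total / base_wafer - 1            # 0.6, i.e. 60 percent
density_factor = 4                                # 4 memory layers incl. bottom
relative_cost_per_bit = (1 + cost_increase) / density_factor
print(total, cost_increase, relative_cost_per_bit)
```

The last line shows the point of the trade-off: a 60 percent cost increase combined with a fourfold density increase yields a cost per bit of 0.4 times that of a single-layer chip.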
MTJ-MRAM and 3D-MRAM
Magnetic Random Access Memory (MRAM) uses one magnetic tunnel junction (MTJ) and one diode (or MOSFET) per cell. The high and low resistance states of an MTJ (i.e., parallel versus anti-parallel magnetic electrodes) are used for storing bits. See, for example, K. Inomata, “Present and Future of Magnetic RAM Technology,” IEICE Trans. Electron., vol. E84-C, p. 740, 2001. Magnetic Tunnel Junction MRAM (MTJ-MRAM) writing is accomplished by passing current through word and bit lines to create a magnetic field that is sufficiently strong to switch the “soft” or “free” magnetic electrode at the cross point of the word and bit lines.
It would be difficult to store more than one bit per cell in MRAM due to the squareness of the MTJ hysteresis loop. One possibility for overcoming this difficulty would be to connect three MTJs in series with each MTJ having a different threshold to store two bits. The complexity and cost of connecting three devices in series for storing twice as much information needs further consideration. For that reason FIG. 1 indicates that MRAM can have only one bit per cell.
A significant obstacle associated with MTJ-MRAM is that the current density necessary for producing a write field scales poorly as the wires are made smaller. The poor scaling is related to a necessary increase in the coercivity of the soft electrode to avoid the superparamagnetism effect. For example, to scale to the 40 nm node, a cube-shaped magnetic bit needs an anisotropy energy density of Ku=50 kBT/V=3.5×10⁴ ergs/cm³. Assuming a magnetization of 1000 emu/cm³, the anisotropy field would need to be Hk=2Ku/M=70 Oe. Using the Stoner-Wohlfarth model of magnetic reversal, Hk can be taken to be approximately equal to the field necessary for fast switching. For 40 nm×40 nm bit and word wires (at 45 degrees to the magnetic axis) to produce a magnetic field of 70 Oe at 40 nm from the wire centers, the current density would need to be at least j=(5/√2)Hk/d=6×10⁷ A/cm². As discussed above, copper wires will fail after a few years at only 1×10⁷ A/cm², so scaling MTJ-MRAM even to 40 nm will require large improvements in the electromigration resistance of copper wires. Consequently, the cost of MRAM will remain quite high in comparison to more scalable technologies.
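The scaling arithmetic above can be reproduced numerically. In the sketch below, the Boltzmann constant and the 330 K stability temperature (taken from the HDD discussion earlier) are assumptions filled in to recover the quoted numbers, with all quantities in CGS units:

```python
import math

# MTJ-MRAM write-current scaling at the 40 nm node (CGS units).
kB = 1.381e-16          # Boltzmann constant, erg/K
T = 330.0               # assumed stability temperature, K
d = 40e-7               # 40 nm in cm
V = d**3                # cube-shaped bit volume, cm^3
M = 1000.0              # magnetization, emu/cm^3

Ku = 50 * kB * T / V             # anisotropy energy density, ~3.5e4 ergs/cm^3
Hk = 2 * Ku / M                  # anisotropy (switching) field, ~70 Oe
j = (5 / math.sqrt(2)) * Hk / d  # required current density, ~6e7 A/cm^2
print(Ku, Hk, j)
```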
MRAM does have one interesting advantage for low cost: high current does not need to pass through the cell because writing is accomplished with magnetic fields. During a read operation, a diode may be needed for preventing multiple current paths in the cross-point architecture, but the diode can be made from thin film amorphous silicon. See, for example, P. P. Freitas et al., “Spin Dependent Tunnel Junctions for Memory and Read-Head Applications,” IEEE Transactions on Magnetics, vol. 36, p. 2796, 2000. Although a thin film amorphous silicon diode is much cheaper than a single crystal silicon diode, the maximum current density through amorphous silicon is only about 10¹ A/cm². Accordingly, the very high resistance of a thin film amorphous silicon diode leads to long RC time constants and very low performance.
Cost estimates associated with MTJ-MRAMs may be reduced dramatically with 3-D integration. For example, assuming three masks per layer and twelve layers, lithography cost will increase by $1440 per wafer. If an additional $1560 is allowed for other expenses, cost increases by 60 percent, but density increases by a factor of 12. Nevertheless, despite good 3-D potential, MRAM has poor scaling and does not appear competitive with other storage methods.
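The same arithmetic as for the OUM estimate applies here; in the sketch below, the $5000 base-wafer cost is an assumption carried over from the OUM discussion:

```python
# 3-D MRAM cost estimate: twelve added layers, three masks per layer,
# $40 per mask step, against an assumed $5000 base wafer.
base_wafer = 5000                        # assumed, as for OUM
litho = 3 * 12 * 40                      # $1440 per wafer
other = 1560                             # other per-layer expenses
total = base_wafer + litho + other       # $8000, a 60 percent increase
relative_cost_per_bit = (total / base_wafer) / 12   # ~0.13 of a 2-D chip
print(total, relative_cost_per_bit)
```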
MATRIX
A MATRIX memory cell has one anti-fuse and one poly-crystalline silicon diode. See, for example, T. H. Lee, “A Vertical Leap for Microchips,” Scientific American, vol. 286, p. 52, 2002. MATRIX memory should have 3-D integration costs that are similar to 3-D MRAM with the advantage of being much more scalable. MATRIX memory, currently in development by Matrix Semiconductor, is the most advanced 3-D solid-state memory concept, with chips nearing production and under consideration for commercial use. The primary disadvantages of MATRIX memory are: (1) the memory is write-once because it is based on destructive breakdown of an insulator, and (2) the memory has low performance because a poly-crystalline silicon diode is used.
1T-FeRAM
A 1T-FeRAM memory cell consists of one MOSFET and one ferroelectric capacitor having a hysteresis loop similar to the exemplary hysteresis loop 200 shown in FIG. 2. 1T-FeRAM memory is very similar to DRAM except that the capacitor dielectric is replaced by ferroelectric material and a slightly different architecture is used. See, for example, O. Auciello et al., “The Physics of Ferroelectric Memories,” Physics Today, vol. 51, p. 22, 1998. Use of the ferroelectric material in place of a dielectric material has several advantages, such as (1) the capacitor is non-volatile and does not need to be refreshed, (2) the capacitor can store about 100 times more charge in the same amount of space, and (3) the capacitor is radiation hardened because the polarization of the ferroelectric is not easily affected by radiation.
This radiation hardness allows the charge stored in a 1T-FeRAM memory cell to be reduced below a million electrons when the sensing method is changed so that current is detected or a gain cell is used. See, for example, D. Takashima, “Overview and Trend of Chain FeRAM Architecture,” IEICE Trans. Electron., vol. E84-C, p. 747, 2001. Consequently, 1T-FeRAM does not suffer from the scaling problems associated with DRAM memory. Even though a ferroelectric material is polycrystalline, it should be capable of scaling to 10 nm. In that regard, it has been calculated that ferroelectric grains of Pb(Zr, Ti)O3 (PZT) as small as 2.5 nm are thermally stable. See, for example, T. Yamamoto, “Calculated Size Dependence of Ferroelectric Properties in PbZrO3—PbTiO3 System,” Integrated Ferroelectrics, vol. 12, p. 161, 1996. Additionally, ferroelectric PZT films as thin as 4 nm have been grown. See, for example, T. Tybell et al., “Ferroelectricity in Thin Perovskite Films,” Applied Physics Letters, vol. 75, p. 856, 1999. Moreover, low-leakage polycrystalline ferroelectric capacitors as thin as 13 nm have been formed. See, for example, T. Kijima et al., “Si-Substituted Ultrathin Ferroelectric Films,” Jpn. J. Appl. Phys., vol. 41, p. L716, 2002. Lastly, lateral ferroelectric domains as small as 6 nm have been switched with scanned probes. See, for example, Y. Cho et al., “Tbit/inch2 Ferroelectric Data Storage Based on Scanning Nonlinear Dielectric Microscopy,” Applied Physics Letters, vol. 81, p. 4401, 2002.
If the number of grains or domain wall pinning sites is sufficiently large in a single capacitor, it should be possible to store two or more bits per cell, but this is likely to be difficult because the intermediate state of the cell cannot be verified without destroying the state. For that reason, FIG. 1 indicates that 1T and 0T-FeRAM can scale to 10 nm, but will be limited to only one bit per cell.
Thus, 1T-FeRAM appears to have a good chance of replacing DRAM because it has similar performance and better scalability. The need for higher dielectric constants has already caused the DRAM industry to investigate perovskite materials extensively.
What is needed is a high-performance non-volatile solid-state memory that scales well and allows for low-cost 3-D integration.