Recent advances in microprocessor performance have outpaced the performance of DRAM. Because of this speed disparity, it is increasingly important to provide large amounts of cache memory on the microprocessor chip in order to meet the memory bandwidth requirements of contemporary applications. Static random access memory (SRAM) has historically been used for on-chip cache memory because of its relative ease of process integration. However, as the need for on-chip memory has grown, the size of the SRAM cell has made its use less attractive. As SRAM occupies an increasingly large percentage of chip area, it becomes a principal determinant of chip size, yield, and cost per chip. Interest in using dynamic random access memory (DRAM) for on-chip cache memory is therefore increasing, because of its high density and low cost. However, integrating DRAM with CMOS logic increases process complexity because of the competing needs of high-performance, low-threshold-voltage (Vt) logic devices and low-leakage DRAM array devices. Additionally, DRAM cells require large storage capacitors, which standard CMOS logic processes do not provide, and the cost of adding such capacitors to a CMOS logic process may be prohibitive for certain applications. As the minimum feature size is reduced from generation to generation, it becomes increasingly difficult and costly to obtain the high storage capacitance required for DRAM cells.
In view of the above, there is a need in the semiconductor industry for a dense, cost-effective replacement for SRAM caches that can be integrated with high-performance logic.