Digital memory devices are essential elements of any digital data processor. Modern digital data processors generally use several different types of memory devices, which have been developed to meet different performance requirements within various functional portions of the data processing system. For example, so-called hard drives are typically used for efficient long-term storage of large amounts of data and programs and allow relatively rapid access thereto, usually in large blocks, even though such access generally requires a substantial number of processor clock cycles. Dynamic memories, in which data is stored as charge on a capacitor, generally allow much faster access, comparable to a smaller number of processor clock cycles, and to smaller amounts of data which may be selectively addressed. However, dynamic memories must be periodically refreshed to compensate for charge that may leak from the capacitors. Such refresh operations may impose a longer (e.g. worst-case) access time. Nevertheless, dynamic random access memories (DRAMs) are widely used since the simplicity of dynamic memory cells (e.g. only a single transistor and capacitor per memory cell in the array area) allows many millions of memory cells to be formed economically and reliably on a chip of moderate size.
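The leak-and-refresh behavior described above can be sketched as a minimal model. All numeric values below (the RC leakage time constant, the sense threshold, the elapsed time) are hypothetical and chosen only for illustration; they do not correspond to any particular memory technology.

```python
import math

def cell_voltage(v0, t, rc=0.2):
    """Stored-capacitor voltage after t seconds, assuming the charge
    leaks away with a (hypothetical) RC time constant of `rc` seconds,
    i.e. simple exponential decay."""
    return v0 * math.exp(-t / rc)

def needs_refresh(v, v_sense=0.5, v_full=1.0):
    """A sense amplifier can still resolve a stored '1' only while the
    cell voltage stays above its (hypothetical) sensing threshold; a
    refresh must occur before the voltage crosses it."""
    return v < v_sense * v_full

# A cell written to 1.0 V leaks toward 0 V; refreshing before the
# voltage crosses the sense threshold preserves the stored bit.
v = cell_voltage(1.0, 0.1)       # voltage after 100 ms of leakage
refresh_due = needs_refresh(v)   # False here; becomes True as v decays
```

The worst-case access time mentioned above corresponds to a read request arriving just as such a refresh cycle has begun.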
Where selectively accessed data must be returned from storage in a substantially uniform and very small number of processor clock cycles, such as for cache memory, static memory structures, referred to as static random access memories (SRAMs), are widely used. SRAMs comprise many rows and columns of storage cells, each comprising a bistable circuit of at least two transistors together with additional selection/pass transistors that allow addressing of individual memory cells. Bistable circuits do not require refreshing and can be switched from one bistable state to another at a speed limited only by the resistance and capacitance of the control electrodes of the transistors and their connections, which determine the slew rate of the output voltage. Thus, in addition to the desirability of forming larger numbers of memory cells on a chip of reasonable size, there is substantial incentive toward reduction of memory cell size and increase of integration density to minimize the resistance and capacitance of transistors and their connections and thereby improve performance. Additionally, since SRAM response speed is critical, SRAMs are usually included in the integrated circuits that access them and can often occupy 50% or more of the chip area, which, in turn, tends to limit the amount of other logic that can be provided unless bit-cell area/footprint is minimized.
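The bistable storage element described above can be sketched at the logic level as follows. This is an illustrative model only, not a circuit simulation; the class and method names are hypothetical, and the two node attributes stand in for the outputs of a cross-coupled inverter pair.

```python
class SramCell:
    """Logic-level sketch of the bistable core of an SRAM storage cell."""

    def __init__(self):
        # Cross-coupled inverter pair: node q drives the inverter that
        # produces q_bar, and q_bar drives the inverter that produces q,
        # so either consistent state is self-reinforcing (bistable).
        self.q = 0
        self.q_bar = 1

    def write(self, bit):
        # When the word line selects the cell, the pass (access)
        # transistors overdrive both storage nodes with complementary
        # values, flipping the cell into the desired stable state.
        self.q = bit
        self.q_bar = 1 - bit

    def read(self):
        # Reading is non-destructive: the bistable loop restores the
        # node voltages, so no refresh is ever required.
        return self.q

cell = SramCell()
cell.write(1)
stored = cell.read()   # remains 1 indefinitely, with no refresh
```

The absence of any refresh path in this model reflects the key contrast with the dynamic cell: state is held by feedback rather than by stored charge.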
Recently, SRAM cell designs have been developed using FinFETs, in which the conduction channel is formed as a raised, fin-like structure, allowing the gate to be placed on two or more sides of the channel to improve conduction and leakage control. At the present state of the art, however, FinFET designs incur a small penalty in memory cell area for a given minimum lithographic feature size and bit-cell layout, since aggressive scaling is more mature for planar FET designs. FinFETs are, in fact, considered more scalable than planar FETs, but have been used primarily for low standby and operating power applications where aggressive scaling is unnecessary or less critical.