A data storage (memory) circuit is a circuit that holds a quantity of information in a physical location (a cell) in such a way that the information can later be extracted (read). This definition can be generalized by treating the memory as an information container in physical space-time, rather than in physical space alone; some of the physical processes involved in data storage are then more natural to understand. It is along the time dimension that different types of memory circuits differ in some of their features.
The modern understanding of information emerged in the first half of the 20th century through the pioneering work of Nyquist and Shannon. A deep understanding of its relation to physics is the foundation of modern information and communication theory. Nyquist was the first to grasp the sampling problem, while Shannon was the first to relate information to the physical concept of entropy (first formulated by Boltzmann at the end of the 19th century).
Conventionally, memory devices are digital circuits that store data in binary form. The number of cells in the memory device determines the number (n) of bits that can be stored in such a memory. The number of bits in turn determines the number (S) of distinguishable states, such that n = log2 S, or equivalently S = 2^n.
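The relation n = log2 S can be sketched in a few lines of code (the helper names below are illustrative, not from the original text):

```python
import math

def states_for_bits(n: int) -> int:
    """Number of distinct states representable by n binary cells: S = 2**n."""
    return 2 ** n

def bits_for_states(s: int) -> int:
    """Minimum number of bits needed to distinguish s states: n = ceil(log2 s)."""
    return math.ceil(math.log2(s))

# Each additional bit doubles the number of states.
assert states_for_bits(8) == 256
assert states_for_bits(9) == 512
assert bits_for_states(256) == 8
```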
It follows from the above that every additional bit doubles the number of states, i.e. the capacity of the memory device. In the ever-increasing drive to raise the storage capacity of memory devices, much effort has been invested in making the circuits themselves more compact and in improving methods of mass production. This has resulted in a dramatic increase in the performance/cost ratio of memory devices in recent years.
However, since the number of states of the memory (i.e. its storage capacity) is determined by the number of bits (i.e. cells in a digital memory), the storage capacities of known memory devices are limited by the number of cells. It is clearly desirable to increase the storage capacity of a memory without adding storage cells.
This desideratum is known per se. Thus, it is known to increase the density of flash memories using Multi-Level Cell (MLC) technology. This technology lowers cost by enabling the storage of multiple bits of data per memory cell, thereby reducing the consumption of silicon area. It is used in various flash memories, such as the Intel StrataFlash memory described by Greg Atwood et al. in the Intel Technology Journal, Q4 1997. As described therein, a flash memory device is a single transistor that includes an isolated floating gate. The floating gate is capable of storing electrons. The behavior of the transistor is altered depending on the amount of charge stored on the floating gate. Charge is placed on the floating gate through a technique called programming. The programming operation generates hot electrons in the channel region of the memory cell transistor. A fraction of these hot electrons gain enough energy to surmount the 3.2 eV barrier of the Si—SiO2 interface and become trapped on the floating gate. For single-bit-per-cell devices, the transistor either has little charge (<5,000 electrons) on the floating gate and thus stores a “1,” or it has a lot of charge (>30,000 electrons) on the floating gate and thus stores a “0.” When the memory cell is read, the presence or absence of charge is determined by sensing the change in the behavior of the memory transistor due to the stored charge. The stored charge is manifested as a change in the threshold voltage of the memory cell transistor.
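The multi-level sensing idea described above can be illustrated with a minimal sketch: a sensed threshold voltage is compared against a set of reference levels, and the number of references it exceeds identifies one of four states (two bits per cell). The reference voltages below are hypothetical values chosen for illustration, not device data from the cited article.

```python
# Hypothetical MLC read: map a sensed threshold voltage (in volts) to a
# two-bit value by comparing it against three illustrative reference levels.
REFERENCE_LEVELS = [1.0, 2.5, 4.0]  # boundaries between the four states

def read_mlc_cell(threshold_voltage: float) -> int:
    """Return the 2-bit value (0..3) inferred from the cell's threshold voltage."""
    value = 0
    for ref in REFERENCE_LEVELS:
        if threshold_voltage > ref:
            value += 1
    return value

# Four distinguishable states from a single cell: 2 bits/cell instead of 1.
assert read_mlc_cell(0.5) == 0
assert read_mlc_cell(1.8) == 1
assert read_mlc_cell(3.0) == 2
assert read_mlc_cell(4.5) == 3
```

Note that the number of states per cell is limited by how many reference levels can be reliably distinguished, which is precisely the physical limitation on the external reference mentioned below.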
It is thus known to compare a physical property associated with a memory cell with an external reference in order to associate multiple states with a single memory cell. It would be desirable to provide a method for increasing the storage capacity of a memory device having a given number of memory cells and a given number of levels of a physically limited external reference.