A content addressable memory (CAM) device is a storage device that is particularly suitable for matching functions because it can be instructed to compare a specific pattern of comparand data with data stored in an associative CAM array. A CAM, also referred to as an associative memory, can include a number of data storage locations, each of which can be accessed by a corresponding address. Functionality of a CAM depends at least in part on whether the CAM includes binary or ternary CAM cells.
Typical binary CAM cells are able to store two states of information: a logic one state and a logic zero state. A binary CAM cell typically includes a random access memory (RAM) cell and a compare circuit. The compare circuit compares the comparand data with the data stored in the RAM cell and provides the match result to a match line. Columns of binary CAM cells may be globally masked by mask data stored in one or more global mask registers. Ternary CAM cells are mask-per-bit CAM cells that effectively store three states of information: a logic one state, a logic zero state, and a don't care state for compare operations. A ternary CAM cell typically includes a second RAM cell that stores local mask data for the cell. When the mask bit has a first predetermined value (a logic low, for example), the local mask data masks the comparison result of the comparand data with the data stored in the first RAM cell so that the comparison result does not affect the match line (i.e., the cell always appears to match). The ternary CAM cell thus offers the user more flexibility to determine, on an entry-per-entry basis, which bits in a word will be masked during a compare operation.
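The per-bit masking behavior described above can be sketched as a small behavioral model. The function names are illustrative assumptions, not from the text; the convention that a mask bit of zero means "don't care" follows the logic-low example above:

```python
def ternary_cell_match(stored_bit, mask_bit, comparand_bit):
    """Behavioral model of a single ternary CAM cell compare.

    When mask_bit is 0 (the don't-care state), the compare is masked
    and the cell always appears to match.
    """
    if mask_bit == 0:          # local mask asserted: result cannot
        return True            # affect the match line
    return stored_bit == comparand_bit


def word_match(stored, masks, comparand):
    """A word matches only if every unmasked cell matches, mirroring
    the wired-AND behavior of cells sharing one match line."""
    return all(ternary_cell_match(s, m, c)
               for s, m, c in zip(stored, masks, comparand))
```

For example, an entry storing 1-0-1 with its middle bit locally masked matches the comparand 1-1-1, because the masked cell never pulls the match line low.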
Many typical CAM devices use static memory technology. However, dynamic memory technology, including dynamic random access memory (DRAM) devices, is also being used because it can provide denser and, therefore, larger memory arrays on the same size chip as similar arrays using static memory technology. The efficient search capabilities of CAM devices have proven useful in many applications, including address filtering and lookups in routers and networking equipment, pattern recognition for encryption/decryption and compression/decompression, and other pattern recognition applications.
FIG. 1 illustrates an embodiment of a binary DRAM CAM (DCAM) cell 100. The DCAM cell 100 comprises a storage element 110 and a compare circuit 120. The storage element 110 is a DRAM cell including a first transistor 112 and capacitor 114 combination coupled to store a data bit (i.e., a logical one or zero), and a second transistor 116 and capacitor 118 combination coupled to store a complementary data bit. The source (drain) of transistor 112 is coupled to the bit line BL, while the source (drain) of transistor 116 is coupled to the complementary bit line BL; the gates of transistors 112 and 116 are both coupled to the word line WL. The writing of data to and the reading of data from the capacitors 114 and 118 are performed by charge transfer through the bit line BL and complementary bit line BL, respectively, in response to the logical state of the word line WL.
More specifically, data is written to the storage element 110 by first activating the word line WL, which turns on the transistors 112 and 116 (i.e., places the transistors in a conducting state). Data supplied on the bit lines BL and BL is then stored in the capacitors 114 and 118, respectively. Data is read from the storage element 110 in a similar fashion: the word line WL is first activated, and the data stored in the capacitors 114 and 118 is then read out via the bit lines BL and BL, respectively.
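The write and read sequence above can be modeled behaviorally as follows. The class and attribute names are illustrative, and the integer attributes stand in for the analog charge stored on capacitors 114 and 118:

```python
class DramStorageElement:
    """Toy model of storage element 110: two transistor/capacitor pairs
    gated by a shared word line WL. Names are illustrative."""

    def __init__(self):
        self.cap_114 = 0        # charge representing the data bit
        self.cap_118 = 1        # charge representing the complement
        self.word_line = False

    def write(self, bit):
        self.word_line = True   # activate WL: transistors 112/116 conduct
        self.cap_114 = bit      # charge transferred from bit line BL
        self.cap_118 = 1 - bit  # complement from the complementary BL
        self.word_line = False  # deactivate WL: charge is trapped

    def read(self):
        self.word_line = True   # activate WL for charge transfer onto BL
        value = self.cap_114
        self.word_line = False
        return value
```

The complementary capacitor is written in the same access, which is what later lets the compare circuit test both the true and complement values without an extra read.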
The compare circuit 120 compares the data stored in the storage element 110 with comparand data provided on the compare signal lines CL and CL. The compare circuit 120 includes NMOS transistors 122, 124, 126, and 128 coupled to perform the comparison function. Transistors 122 and 124 are coupled in series to form a first path through the compare circuit 120, and transistors 126 and 128 are coupled in series to form a second path. The drains (sources) of transistors 122 and 126 are coupled to the match line ML, while the sources (drains) of transistors 124 and 128 are coupled to a low voltage source VSS (e.g., ground). Capacitor 114 is coupled to the gate of transistor 124, so that the stored data of the storage element 110 controls that gate, while capacitor 118 is coupled to the gate of transistor 128, so that the stored complementary data controls that gate. The compare lines CL and CL are coupled to the gates of transistors 126 and 122, respectively.
During a compare operation, the match line ML is pre-charged to a high voltage (e.g., the logical one state) to signal a “hit” condition. If the data stored in the storage element 110 matches the comparand data provided on the compare line CL and the complementary compare line CL, the transistors 122, 124, 126, and 128 form an open circuit between the match line ML and the low voltage source VSS, and the match line ML remains charged to the logical one state. In the event of a mismatch, however, the transistors 122, 124, 126, and 128 form a short circuit between the match line ML and the low voltage source VSS, and the match line ML is discharged to the low potential VSS (e.g., the logical zero state) to signal a “miss” condition.
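With the gate connections described above (compare line CL driving transistor 126, its complement driving transistor 122, the stored bit driving transistor 124, and the complementary bit driving transistor 128), the compare reduces to an exclusive-OR: a series path conducts only on a mismatch. A sketch of this logic, with an illustrative function name:

```python
def match_line_state(stored_bit, comparand_bit):
    """Model of compare circuit 120: each series NMOS pair conducts
    only when both gates are high, and either conducting path
    discharges the precharged match line ML to VSS."""
    d, d_bar = stored_bit, 1 - stored_bit          # caps 114 / 118
    cl, cl_bar = comparand_bit, 1 - comparand_bit  # compare lines
    path1 = cl_bar and d       # transistors 122 (gate CL-bar) and 124 (gate D)
    path2 = cl and d_bar       # transistors 126 (gate CL) and 128 (gate D-bar)
    discharged = path1 or path2                    # short to VSS on mismatch
    return "miss" if discharged else "hit"
```

On a match (for example, stored one and comparand one), each path has exactly one off transistor, so the match line keeps its precharge; on a mismatch, one path has both transistors on and the line discharges.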
Over time, the charges stored on the capacitors 114 and 118 may gradually dissipate through leakage currents in the transistors 112 and 116. For example, charge may be lost due to channel leakage (drain to source), gate leakage (gate to source, gate to drain), drain leakage (drain to substrate, drain to VDD), leakage from the capacitors 114 and 118 themselves, or any combination thereof. Thus, a logical one data bit stored on the capacitor 114 or 118 may eventually degrade into a logical zero data bit, making the maintenance of the data stored within the storage element 110 a critical issue.
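To first order, the aggregate of these leakage paths acts like a resistance slowly discharging the storage capacitor, so the cell voltage decays roughly exponentially. The sketch below illustrates why a stored one eventually reads as a zero; the time constant, supply voltage, and sense threshold are purely illustrative values, not taken from the text:

```python
import math

def stored_voltage(v0, t, tau=0.05):
    """Capacitor voltage after t seconds, assuming the combined leakage
    behaves like an RC discharge with time constant tau (illustrative)."""
    return v0 * math.exp(-t / tau)

def sensed_bit(voltage, v_sense=0.9):
    """A stored logic one is read correctly only while the remaining
    voltage stays above the sense threshold (illustrative value)."""
    return 1 if voltage >= v_sense else 0
```

With these example numbers, a cell written to 1.8 V still senses as a one shortly after the write but falls below the threshold after a few time constants, which is why DRAM-based storage must be refreshed (or otherwise protected against leakage) to remain valid.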
Prior methods for addressing this issue bias the substrate in which the transistors 112 and 116 are disposed. For example, applying a negative voltage bias to the substrate (transistor well) effectively increases the gate threshold voltages of the transistors 112 and 116. This decreases the likelihood that a channel will form between the source and drain of the transistors 112 and 116, thus reducing leakage of the charges stored on the capacitors 114 and 118, respectively. It is important to note that biasing the substrate merely reduces the cumulative leakage current; it does not prevent current from leaking altogether. On the other hand, biasing the substrate may have an adverse effect on the compare circuit 120. For example, in the interest of maximizing die space, the transistors 122, 124, 126, and 128 of the compare circuit 120 are typically disposed on the same substrate as the transistors 112 and 116 of the storage element 110. Thus, a bias applied to the substrate of transistors 112 and 116 is similarly applied to the transistors 122, 124, 126, and 128. This increases the threshold voltages VTH of the compare transistors 122, 124, 126, and 128, resulting in slower compare times and/or requiring that a higher voltage be applied to the compare lines CL and CL during compare operations.
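The effectiveness of substrate biasing comes from the exponential sensitivity of subthreshold leakage to threshold voltage: to first order, an off transistor's current follows I = I0 · exp((VGS − VTH)/(n·kT/q)). The sketch below applies this standard textbook relation with illustrative parameter values (I0, n, and the voltages are assumptions, not from the text) to show that a modest VTH increase cuts leakage by roughly an order of magnitude:

```python
import math

def subthreshold_current(v_th, v_gs=0.0, n=1.5, thermal_v=0.026, i0=1e-7):
    """First-order subthreshold leakage of an off NMOS transistor,
    I = I0 * exp((Vgs - Vth) / (n * kT/q)). Parameter values are
    illustrative, not from the text."""
    return i0 * math.exp((v_gs - v_th) / (n * thermal_v))

# A 100 mV threshold increase, as produced by a negative substrate bias,
# reduces subthreshold leakage by roughly an order of magnitude:
reduction = subthreshold_current(0.4) / subthreshold_current(0.5)
```

The same exponential works against the compare circuit when both share a substrate: the raised VTH of transistors 122, 124, 126, and 128 leaves less gate overdrive during compares, which is the speed penalty noted above.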
Alternatively, the compare circuit 120 may be disposed on a substrate that is isolated from that of the storage element 110, in order to prevent the bias applied to the substrate of the storage element 110 from adversely affecting the compare circuit 120. However, the result is an overall increase in die area for the DCAM cell 100, as the storage element 110 and the compare circuit 120 must be disposed farther apart from one another.