In conventional memory systems, such as random access memory (RAM), binary digits (bits) are stored in memory cells and are accessed by a processor that specifies a linear address associated with the accessed location. To ensure proper processor control, each operation that accesses memory must declare, as part of the access instruction, the address of the memory cell or cells. Conventional memory systems are not well suited to content based searches. A content based search in a conventional RAM requires a software based algorithmic search, controlled by a microprocessor, and many memory access operations are required to perform it. For this reason, searches in conventional RAMs are neither fast nor efficient in their use of processor resources.
To overcome these inadequacies, an associative memory system called Content Addressable Memory (CAM) has been developed. A CAM allows cells to be referenced by their contents. Because of this feature, CAMs first found use in lookup table implementations such as cache memory subsystems and are now rapidly finding use in networking systems. The CAM's most valuable feature is its ability to perform a search on multiple locations as a single operation, in which searched data (also referred to as a “search key”) is compared to data stored within the CAM. Typically, a search key is loaded onto search lines and compared to the words stored in the CAM. During a search operation, a match or mismatch signal associated with each stored word is generated on a matchline, indicating whether or not the search key matches that stored word.
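The distinguishing behavior described above, i.e. comparing one search key against every stored word in a single operation, can be illustrated with a short behavioral sketch. This is a software model of the search semantics only, not of the hardware; the function name and data values are illustrative assumptions.

```python
# Behavioral sketch (not hardware): a CAM compares the search key
# against every stored word in a single search operation and reports,
# per word, a match or mismatch signal -- modeled here as a list of
# booleans, one per matchline.

def cam_search(stored_words, search_key):
    """Return a matchline result (True = match) for every stored word."""
    return [word == search_key for word in stored_words]

table = [0b1010, 0b1100, 0b1010]
print(cam_search(table, 0b1010))  # [True, False, True]
```

In hardware all comparisons occur in parallel during one cycle; the loop here only models the per-word match/mismatch outcome.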
CAM stores data in an array of cells, which are generally either SRAM based cells or DRAM based cells. Until recently, SRAM based CAM cells have been most common because of their simpler implementation. However, SRAM based cells require more transistors than DRAM based cells. As a result, SRAM based CAMs have much lower packing density than DRAM based CAMs.
A typical CAM block diagram is shown in FIG. 1. The CAM 10 includes a memory array 25, of CAM cells (not shown) arranged in rows and columns. A predetermined number of CAM cells in a row store a word of data. An address decoder 17 is used to select any row within the CAM array 25 to allow data to be written into or read out of the selected row. Data access circuitry such as bitlines, column selection devices, and wordline drivers, are located within the array 25 to transfer data into and out of the array 25. Located next to CAM array 25 for each row of CAM cells are matchline sense circuits (not shown), which are used during search operations for outputting a result indicating a successful or unsuccessful match of a search key against the word stored in the row. The results for all rows are processed by a priority encoder 22 to output the address (Match Address) corresponding to the location of a matched word. The match addresses are stored in match address registers 18 before being output by a match address output block 19. Data is written into array 25 through a data I/O block 11 and various data registers 15. Data is read out from the array 25 through data output register 23 and the data I/O block 11. Other components of the CAM include a control circuit block 12, flag logic block 13, various control and address registers 16, and refresh counter 20. A JTAG block and voltage supply generation block can optionally be used in conjunction with FIG. 1, as would be apparent to one skilled in the art.
FIG. 2 depicts a hierarchical view of the typical CAM array 25. CAM array 25 includes CAM cells 30 and a matchline sense circuit block 26. CAM cells 30 of the CAM array 25 are arranged in rows and columns. CAM cells 30 of a row are connected to a common matchline MLi, word line WLi and ground line, or tail line, TLi; CAM cells 30 of a column are connected to a common pair of search lines SLjb/SLj and a common pair of bitlines BLj/BLjb, where i is an integer value between 0 and n, and j is an integer value between 0 and m. Located next to the CAM array 25 for each row is matchline sense circuit block 26. Matchline sense circuit block 26 includes one matchline sense circuit 27 connected to a respective matchline MLi and tail line TLi. Both MLi and TLi are used during search operations for outputting match signals ML_OUT0 through ML_OUTn indicating a successful or unsuccessful match of a search key against the stored word. Matchlines MLi and tail lines TLi are connected to their respective matchline sense circuits 27, and in some implementations the tail lines TLi can be selectively or permanently connected to ground. Although not shown, the matchline sense circuits 27 also receive control signals to control their operation, as a person skilled in the art would understand to be necessary for proper operation of the circuit. Because the matchline and tail line connect to each CAM cell in the row, a mismatch in a majority of cells of the same row results in a faster change of the voltage difference between MLi and TLi, whereas a mismatch in only a few CAM cells drains less current and produces a much slower change in that voltage difference.
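The dependence of matchline discharge speed on the number of mismatching cells, noted at the end of the preceding paragraph, follows from the fact that each mismatching cell adds a parallel conductive path. A minimal numerical sketch, using assumed (illustrative) component values rather than figures from the text:

```python
# Sketch of the effect described above: with N mismatching cells on a
# row, N parallel stacks drain the matchline, so the total discharge
# current scales with N and the time to develop a given ML-TL voltage
# difference shrinks as 1/N. All values are illustrative assumptions.

C = 200e-15      # assumed matchline capacitance (F)
V = 0.2          # assumed required ML-TL voltage difference (V)
I_STACK = 50e-6  # assumed current of one conducting search stack (A)

def discharge_time(n_mismatching):
    """Time for the ML-TL voltage difference to reach V with N paths."""
    return C * V / (n_mismatching * I_STACK)

print(discharge_time(1))   # slowest case: a single mismatching cell
print(discharge_time(72))  # many mismatching cells discharge much faster
```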
There are a number of known CAM cell schemes. A good source that includes a description of several such schemes is “Content Addressable Memory Core Cells: A Survey” by Kenneth J. Schultz, published in INTEGRATION, the VLSI journal, 23 (1997), pp. 171–188, the contents of which are incorporated herein by reference. The most relevant, yet quite different, scheme among such prior art schemes is shown in FIG. 3A. This scheme is a typical ternary DRAM based CAM cell 30 as described in U.S. Pat. No. 6,320,777 issued on Nov. 20, 2001, the contents of which are also incorporated herein by reference. Cell 30 has a comparison circuit which includes an n-channel search transistor 31 connected in series with an n-channel compare transistor 32 between a matchline ML and a tail line TL. A search line SLb is connected to the gate of search transistor 31. The storage circuit includes an n-channel access transistor 33 having its gate connected to wordline WL and connected in series with capacitor 34 between bitline BL and a cell plate voltage potential VCP. Charge storage node CELL1 is connected to the gate of compare transistor 32 to turn the transistor 32 on or off depending on the charge stored in capacitor 34, i.e. whether CELL1 is at logic “1” or logic “0”. The remaining transistors and capacitor replicate transistors 31, 32, 33 and capacitor 34 for the other half of the ternary data bit, are connected to corresponding lines SL and BLb, and are provided to support ternary data storage. Together the two halves can store a ternary value representing logic “1”, logic “0”, or “don't care”, as shown in Table 1.
TABLE 1

Ternary Value    CELL1    CELL2
0                0        1
1                1        0
“Don't Care”     0        0
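The stored-value encoding of Table 1 can be checked with a behavioral sketch of the two series stacks described for FIG. 3A. The search-line encoding (SLb, SL) used below is an assumed convention consistent with the circuit description, in which a stack conducts, causing a mismatch, when its search line and its storage node are both high; it is not taken verbatim from the text.

```python
# Sketch of the ternary DRAM CAM cell match logic of FIG. 3A / Table 1.
# Stored encoding (CELL1, CELL2) follows Table 1; the search-line
# encoding (SLb, SL) is an assumed convention for illustration.

STORED = {"0": (0, 1), "1": (1, 0), "X": (0, 0)}   # X = don't care
SEARCH = {"0": (1, 0), "1": (0, 1)}                # key -> (SLb, SL)

def cell_mismatch(stored, key):
    """True if a conductive ML-to-TL path forms through either stack."""
    cell1, cell2 = STORED[stored]
    slb, sl = SEARCH[key]
    # Stack 1: search transistor 31 (gate SLb) in series with compare
    # transistor 32 (gate CELL1); stack 2 is its mirror on SL/CELL2.
    return bool((slb and cell1) or (sl and cell2))

for stored in "01X":
    for key in "01":
        print(stored, key, "mismatch" if cell_mismatch(stored, key) else "match")
```

Note how a stored “don't care” (both nodes low) can never form a conductive path, so it matches any search key, which is exactly the masking behavior ternary storage is meant to provide.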
In some matchline sensing schemes of the prior art, each matchline is initially precharged high to the full VDD supply. A matchline will be discharged to ground through the channels of transistors 31, 32 if the contents of its stored word do not match, i.e. mismatch, the search key, but will remain at the VDD level if the stored word matches the search key. Each matchline voltage level is sensed by a matchline sensing circuit, which generates a result by comparing the matchline voltage level to a reference voltage level. Other variations of sensing schemes are also known. In general, however, the matchline voltage level changes when a mismatch occurs, since the matchline is driven toward ground or toward VDD, depending on the scheme. In the case of a match, the matchline is not discharged and its voltage level does not change.
The tail line TL is typically connected to ground. Because n-channel transistors have higher efficiency, all the transistors tend to be n-channel rather than p-channel. The operation of the ternary DRAM cell is described in detail in the aforementioned U.S. Pat. No. 6,320,777.
FIG. 3B illustrates a traditional SRAM-based ternary CAM cell. Two memory cells, SRAM Cell 1 and SRAM Cell 2, are provided on the P side and Q side, respectively, of the ternary CAM cell. The SRAM Cell 1 provides as an output at a node thereof a signal SNP, which in FIG. 3B is provided to the gate of transistor M2. The SRAM Cell 2 provides as an output at a node thereof a signal SNQ, which in FIG. 3B is provided to the gate of transistor M4. Transistors M1 and M2 are connected in series between ML and TL and are used to perform search operations relating to the SRAM Cell 1 on the P side. Transistors M3 and M4 are connected in series between ML and TL and are used to perform search operations relating to the SRAM Cell 2 on the Q side. Transistor pairs M1, M2 and M3, M4 constitute two so-called search stacks, each pair being provided as a search stack for the P and Q side, respectively.
The cell in FIG. 3B operates as follows. Information is stored in the cell according to Table 2 below. Note that data is encoded as shown in the table for proper operation of the search stacks.
TABLE 2
Truth Table for Ternary Data for CAM cell in FIG. 3B

Data            SNP    SNQ    SLP    SLQ
0               0      1      1      0
1               1      0      0      1
“Don't Care”    0      0      0      0
Prior to a search operation, data has been written into the CAM cell using well known and understood techniques. A search is initiated by placing the search key information onto the searchlines (SL). The cell is arranged such that when the comparison between the stored data and the search data results in a match, neither side of the cell (P side or Q side) produces a conductive path between the matchline (ML) and tail line (TL), via M1 and M2, or M3 and M4. If there is a mismatch, however, at least one search stack of the cell will be conductive and will cause the ML voltage to equalize to the TL voltage level. The sensing scheme can be built in many different ways; its main function is to determine whether or not the stored data matches the search key by detecting the presence or absence of a conductive path between the ML and TL. The more CAM cells placed on a single ML, the denser the CAM becomes, as fewer sense amplifiers are required. However, performance can suffer due to accumulated parasitic capacitance, which increases proportionally with the number of cells on the same ML.
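The search-stack behavior just described can be verified directly against the encoding of Table 2. The following is a behavioral sketch of the FIG. 3B cell, not the circuit itself; a match corresponds to neither stack conducting.

```python
# Behavioral check of the FIG. 3B search stacks against Table 2.
# Stack P conducts when SLP and SNP are both high (M1 and M2 in
# series); stack Q conducts when SLQ and SNQ are both high (M3, M4).
# A match means NEITHER stack conducts, so ML stays isolated from TL.

ENC = {  # data value -> (SNP, SNQ, SLP, SLQ) per Table 2
    "0": (0, 1, 1, 0),
    "1": (1, 0, 0, 1),
    "X": (0, 0, 0, 0),  # "don't care"
}

def matches(stored, key):
    """True when the stored ternary value matches the searched value."""
    snp, snq, _, _ = ENC[stored]       # storage nodes of the cell
    _, _, slp, slq = ENC[key]          # search lines driven by the key
    p_conducts = slp and snp
    q_conducts = slq and snq
    return not (p_conducts or q_conducts)

print(matches("0", "0"), matches("0", "1"))  # True False
print(matches("X", "1"), matches("1", "X"))  # True True
```

A stored “don't care” (SNP = SNQ = 0) matches any key, and a searched “don't care” (SLP = SLQ = 0) masks that bit position entirely, which is the encoding constraint the text refers to.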
One of the great challenges in the design of an integrated CAM is dealing with the large capacitance of the MLs during a search operation. This poses problems in three areas. First, the speed of the search operation is generally limited by how fast the ML voltage level can change when it is sensed to indicate whether or not a conductive path is present. This speed depends on the value of the ML parasitic capacitance and the current of the conductive path between the matchline and tail line. The time t needed to develop a voltage difference of V volts between ML and TL, given a capacitance C between ML and TL and a conductive path current I, is t=CV/I. It is possible to reduce the ML capacitance by segmenting the matchline into smaller sections, as described in U.S. Pat. No. 6,584,003 issued on Jun. 24, 2003 to Kim et al., which is incorporated by reference herein. However, this ML capacitance reduction comes at the expense of silicon area due to the additional sense circuitry required. Second, the approximate power budget during a search operation can be expected to be about 40% SL power, 40% ML power and 20% peripheral circuitry power. Note that search operation power is by far the largest component of dynamic power consumption for a CAM chip. Since ML power is proportional to CV²f, any reduction in ML capacitance will directly reduce the matchline portion of the search power consumption. Third, as all matchline sense amplifiers are activated simultaneously during a search operation, a large power spike is produced. This can be particularly problematic since it can cause significant power rail noise or even power grid collapse.
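The two relations cited above, t = CV/I for the sense delay and P = CV²f for matchline dynamic power, can be put into back-of-envelope numbers. The component values below are illustrative assumptions, not figures from the text:

```python
# Back-of-envelope evaluation of t = C*V/I (matchline sense delay)
# and P = C*V**2*f (matchline dynamic power). All component values
# are assumed for illustration only.

C = 200e-15   # assumed ML capacitance: 200 fF
V = 0.2       # assumed sense voltage difference: 200 mV
I = 50e-6     # assumed single-mismatch path current: 50 uA
f = 100e6     # assumed search rate: 100 MHz

t = C * V / I           # time to develop the sense voltage difference
p_ml = C * V**2 * f     # dynamic power of one matchline

print(f"t = {t * 1e9:.2f} ns, P_ML = {p_ml * 1e6:.2f} uW")
```

These expressions make the trade-off explicit: halving C through matchline segmentation halves both the sense delay and the matchline power, at the area cost of the extra sense circuitry noted above.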
The ML capacitance has contributions from each of the following components: the wire capacitance of the ML, and the source and drain capacitances of M1, M2, M3 and M4, each of which in turn consists of a number of components. The latter also change depending on which data pattern is applied as the search key. In fact, it has been found that the worst case ML capacitance occurs when one of the search lines is high.
CAM cells are known in which as many as two transistors are directly connected to the matchline, each transistor contributing to the matchline capacitance, as described above. Up to four transistors are typically provided in two search stacks, one for each side of the memory cell. Such CAM cells are described, for example, in the following three references: U.S. Pat. No. 6,483,733 issued to V. Lines et al. (Mosaid Technologies Inc.) on Nov. 19, 2002; U.S. Pat. No. 5,949,696 issued to N. B. Threewitt (Cypress Semiconductor Corporation) on Sep. 7, 1999; and U.S. Pat. No. 6,418,042 issued to Srinivasan et al. (NetLogic Microsystems, Inc.) on Jul. 9, 2002.
U.S. Pat. No. 6,154,384 issued to Nataraj et al. (NetLogic Microsystems, Inc.) on Nov. 28, 2000 describes a ternary content addressable memory cell, which includes a first memory cell, a compare circuit, a second memory cell and a mask circuit. The compare circuit of the '384 patent does not use four transistors in two stacks, as in the patents mentioned earlier. Rather, it includes three transistors that perform the comparison function, thereby reducing the matchline capacitance somewhat. However, there is a need for reducing the matchline capacitance even further, in order to improve speed and reduce power consumption and noise.
It is, therefore, desirable to provide a ternary CAM cell that provides reduced matchline capacitance and increased current for the conductive path between matchline and tail line.