The present invention relates to content addressable memory (CAM). In particular, the present invention relates to a circuit and method for high speed precharging of bitlines in an open bitline architecture CAM device.
In many conventional memory systems, such as random access memory, binary digits (bits) are stored in memory cells and are accessed by a processor that specifies a linear address associated with the given cell. This system provides rapid access to any portion of the memory system, within certain limitations. To facilitate processor control, each operation that accesses memory must declare, as part of the instruction, the address of the memory cell or cells required. Standard memory systems are not well designed for content based searches. A content based search in standard memory requires a software based algorithmic search under the control of the microprocessor, and many memory operations are required to perform a single search. Such searches are neither quick nor efficient in their use of processor resources.
To overcome these inadequacies, an associative memory system called Content Addressable Memory (CAM) has been developed. CAM allows cells to be referenced by their contents, so it first found use in lookup table implementations such as cache memory subsystems and is now rapidly finding use in networking systems. CAM's most valuable feature is its ability to perform a search and compare of multiple locations as a single operation, in which search data is compared with data stored within the CAM. Typically, search data is loaded onto search lines and compared with stored words in the CAM. During a search-and-compare operation, a match or mismatch signal associated with each stored word is generated on a matchline, indicating whether or not the search word matches a stored word. A typical word of stored data includes actual data with a number of appended header bits, such as an "E" or empty bit for example, although the header bits are not specifically searched during search-and-compare operations.
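The single-operation search-and-compare behaviour described above can be modelled in software. The following minimal sketch (function and variable names are illustrative only, not part of the invention) compares a search word against every stored word at once and produces one match flag, representing a matchline, per word:

```python
def cam_search(stored_words, search_word):
    """Toy model of a CAM search-and-compare: every stored word is
    compared against the search word in a single operation, and a
    match or mismatch flag (the 'matchline') is produced per word."""
    return [word == search_word for word in stored_words]

# A single search examines all locations at once; a conventional RAM
# would instead need one addressed read per location.
matchlines = cam_search(["0101", "1100", "0101"], "0101")
# matchlines -> [True, False, True]
```

In the actual device, the per-word results feed a priority encoder that outputs the address of a matched location.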
A CAM stores data in a matrix of cells, which are generally either SRAM based or DRAM based. Until recently, SRAM based CAM cells have been most common because of their simple implementation. However, to provide ternary state CAMs, i.e. where the search and compare operation returns a "0", "1" or "don't care" result, a ternary state SRAM based cell typically requires many more transistors than a DRAM based cell. As a result, ternary state SRAM based CAMs have a much lower packing density than their ternary DRAM based counterparts.
A typical CAM block diagram is shown in FIG. 1. The CAM 10 includes a matrix, or array 100, of DRAM based CAM cells (not shown) arranged in rows and columns. An array of DRAM based ternary CAM cells has the advantage of occupying significantly less silicon area than its SRAM based counterpart. A predetermined number of CAM cells in a row store a word of data. An address decoder 17 is used to select any row within the CAM array 100 to allow data to be written into or read out of the selected row. Data access circuitry, such as bitlines and column selection devices, is located within the array 100 to transfer data into and out of the array 100. Located within CAM array 100, for each row of CAM cells, are matchline sense circuits (not shown), which are used during search-and-compare operations to output a result indicating a successful or unsuccessful match of a search word against the stored word in the row. The results for all rows are processed by the priority encoder 22 to output the address (Match Address) corresponding to the location of a matched word. The match address is stored in match address registers 18 before being output by the match address output block 19. Data is written into array 100 through the data I/O block 11 and the various data registers 15. Data is read out from the array 100 through the data output register 23 and the data I/O block 11. Other components of the CAM include the control circuit block 12, the flag logic block 13, the voltage supply generation block 14, various control and address registers 16, refresh counter 20 and JTAG block 21.
FIG. 2 shows a typical ternary DRAM type CAM cell 140 as described in Canadian Patent Application No. 2,266,062, filed Mar. 31, 1999, the contents of which are incorporated herein by reference. Cell 140 has a comparison circuit which includes an n-channel search transistor 141 connected in series with an n-channel compare transistor 142 between a matchline ML and a tail line TL. A search line SL* is connected to the gate of search transistor 141. The storage circuit includes an n-channel access transistor 143 having a gate connected to a wordline WL and connected in series with capacitor 144 between bitline BL and a cell plate voltage potential VCP. Charge storage node CELL1 is connected to the gate of compare transistor 142 to turn on transistor 142 if there is charge stored on capacitor 144, i.e. if CELL1 is logic "1". The remaining transistors and capacitor replicate transistors 141, 142, 143 and capacitor 144 for the other half of the ternary data bit, are connected to corresponding lines SL and BL*, and are provided to support ternary data storage. Together they can store a ternary value representing logic "1", logic "0", or "don't care".
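The ternary comparison performed by such a cell can be sketched abstractly. In this illustrative model (not the circuit itself; the encoding of the two storage nodes into the three states is abstracted away), a cell storing "don't care" matches either search bit, and a row's matchline stays asserted only if every cell matches:

```python
def ternary_match(stored, search_bit):
    """Model of one ternary CAM cell: 'stored' is '0', '1' or 'X'
    (don't care); 'X' matches either search bit."""
    return stored == "X" or stored == search_bit

def word_match(stored_word, search_word):
    # The matchline for a row remains asserted only if every cell in
    # the stored word matches the corresponding search bit; a single
    # mismatching cell discharges the matchline.
    return all(ternary_match(s, b) for s, b in zip(stored_word, search_word))

# word_match("1X0", "110") -> True  ('X' matches anything)
# word_match("1X0", "111") -> False (last cell mismatches)
```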
The tail line TL is typically connected to ground and all the transistors are n-channel transistors. The description of the operation of the ternary DRAM cell is detailed in the aforementioned reference.
As previously mentioned, memory array 100 uses DRAM type memory cells to attain a higher density of cells per unit area of silicon, which has the benefit of reducing the overall cost of manufacturing. However, within the field of DRAM memory there are two well known architectures for arranging the memory cells and bitlines which, when applied to the ternary CAM of the present invention, each provide distinct advantages and disadvantages to the CAM device.
The first architecture is the open bitline architecture, generally shown in FIG. 3. The arrangement shown in FIG. 3 is representative of the physical layout of the bitlines with respect to the bitline sense amplifier (BLSA) on a fabricated device. Wordlines, memory cells and read/write circuits are intentionally omitted to simplify the schematic, but it will be understood by those skilled in the art that wordlines would run perpendicular to the bitlines, memory cells would be located near the intersection between each wordline and bitline, and read/write circuits are coupled to the bitlines. Complementary bitlines 32 and 34 extend away from the left and right sides of the bitline sense amplifier (BLSA) 33. A bitline sense amplifier such as BLSA 33 is well known in the art and typically includes a pair of cross-coupled complementary CMOS inverters. An n-channel equalization transistor 31 is connected between bitlines 32 and 34 for electrically shorting the two bitlines together, and has a gate controlled by a bitline equalization signal BLEQ. Bitlines 32 and 34, equalization transistor 31 and BLSA 33 form one open bitline pair. Another bitline pair, consisting of bitlines 36 and 37, equalization transistor 35 and BLSA 38, is configured identically to the corresponding elements of the first open bitline pair. In a memory array, a plurality of open bitline pairs are arranged one below the other as shown in FIG. 3, in which all the bitlines connected to the left side of the BLSAs are part of the left sub-array and all the bitlines connected to the right side of the BLSAs are part of the right sub-array. For DRAM memories, it is necessary to precharge bitlines, through bitline precharge transistors (not shown), to a mid-point potential level prior to reading data from a DRAM memory cell connected to it. This mid-point potential level is typically half the high power supply potential used by the bitline sense amplifiers.
This is to allow the bitline sense amplifier to detect small changes in the potential level of the bitline when charge is added or removed by the memory cell storage capacitor.
A brief discussion of a read and precharge operation for the open bitline architecture of FIG. 3 follows. It is assumed that all bitlines have been precharged to a mid-point potential level between a high and a low logic potential level in a previous operation. During a read operation, one wordline of either the left or right sub-array is driven to access one memory cell connected to each bitline of the respective sub-array. The bitlines of the unaccessed sub-array remain at the mid-point potential level, which is the reference potential level used by the BLSA during sensing of the data on the bitlines of the accessed sub-array. The BLSA detects the shift in the potential level of the bitline when the storage capacitor of the accessed memory cell is coupled to the bitline, and amplifies and latches the full CMOS logic potential level of the bitline. Since the BLSA is a cross-coupled latch circuit, the accessed bitline and its corresponding complementary bitline are driven to opposite logic potential levels after the data has been read out, and, since the selected wordline remains activated, the full CMOS level is restored into the accessed cell.
To precharge the bitlines in preparation for the next read operation, control signal BLEQ is driven to the high logic level to turn on all equalization transistors and short each complementary pair of bitlines together. The bitlines having the high logic potential level will equalize with the bitlines having the low logic potential level through charge sharing, such that both reach a mid-point potential level.
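The charge-sharing step above can be illustrated numerically. In this minimal sketch (names and supply values are illustrative assumptions, not taken from the specification), shorting two bitlines conserves total charge, so the final voltage is the capacitance-weighted mean, which lands at the mid-point when the two bitline capacitances are equal:

```python
def equalized_voltage(v_high, v_low, c_high=1.0, c_low=1.0):
    """Charge sharing between two shorted bitlines: total charge
    Q = C_h*V_h + C_l*V_l is conserved, so the final voltage is the
    capacitance-weighted mean of the two starting voltages."""
    return (c_high * v_high + c_low * v_low) / (c_high + c_low)

# With equal bitline capacitances, a 1.8 V / 0.0 V complementary pair
# settles at the 0.9 V mid-point used as the sensing reference.
v_mid = equalized_voltage(1.8, 0.0)
```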
The open bitline architecture allows for efficient packing of ternary CAM memory cells to reduce the overall area occupied by the memory array. One disadvantage of the open bitline architecture is unbalanced bitlines due to capacitive coupling of an active wordline to only one bitline of the complementary pair of bitlines. The bitline acting as a reference bitline is not crossed by an active wordline, thus it is not disturbed in the same way as the bitline crossing an active wordline. Therefore potential read errors may result. Another more significant disadvantage is the slow precharge speed. As memory densities grow, the bitlines become longer, and longer bitlines inherently have more resistance and capacitance than shorter ones. The precharge and equalization speed of the bitlines could be improved if an additional equalization transistor were connected between the two farthest ends of the complementary bitlines, instead of just at the two closest ends as shown in FIG. 3. However, it is impractical to add such an additional equalization transistor, since the metal lines for connecting it would be as long as the bitlines, hence contributing more capacitance to the system. Therefore, when equalization is slow, the overall access speed of the CAM becomes slow, which restricts the CAM from being used in high speed applications.
The second architecture is the folded bitline architecture, generally shown in FIG. 4. The arrangement shown in FIG. 4 is representative of the physical layout of the bitlines with respect to the bitline sense amplifier (BLSA) on a fabricated device. Wordlines, memory cells and read/write circuits are intentionally omitted to simplify the schematic, but it will be understood by those skilled in the art that wordlines would run perpendicular to the bitlines, memory cells would be located near the intersection between each wordline and bitline, and read/write circuits are coupled to the bitlines. Complementary bitlines 46 and 47 extend away from the left side of a shared bitline sense amplifier (BLSA) 41, and complementary bitlines 48 and 49 extend away from the right side of BLSA 41. A shared bitline sense amplifier such as BLSA 41 is well known in the art, and would typically consist of a pair of cross-coupled complementary CMOS inverters. N-channel equalization transistors 42 and 43 are connected between bitlines 46 and 47 at opposite ends of bitlines 46 and 47. Similarly, n-channel equalization transistors 44 and 45 are connected between bitlines 48 and 49 at opposite ends of bitlines 48 and 49. Equalization transistors 42 and 43 have gates controlled by a left sub-array bitline equalization signal BLEQ_L, and equalization transistors 44 and 45 have gates controlled by a right sub-array bitline equalization signal BLEQ_R. In a typical array, a shared BLSA and respective pairs of folded bitlines are arranged in a column, and several columns can be arranged side by side. In FIG. 4, bitlines 46 and 47 and equalization transistors 42 and 43 are located within a left sub-array, and bitlines 48 and 49 and equalization transistors 44 and 45 are located within a right sub-array.
A brief discussion of a read and precharge operation for the folded bitline architecture of FIG. 4 follows. It is assumed that all bitlines have been precharged to a mid-point potential level between a high and a low logic potential level in a previous operation. During a read operation, one wordline of either the left or right sub-array is driven to access one memory cell connected to each bitline, BL0 or BL0* for example, of the respective sub-array, and the corresponding equalization control signal, BLEQ_L or BLEQ_R, is turned off. The folded complementary bitlines of the unaccessed sub-array, BL1 and BL1* for example, remain at the precharged mid-point potential level. If a memory cell connected to BL0 is accessed by the driven wordline, then the complementary bitline BL0* remains at the precharged mid-point potential level, which is the reference potential level used by BLSA 41. Accordingly, the role of each bitline is reversed if a memory cell connected to BL0* is accessed instead of a memory cell connected to BL0. Furthermore, the roles of both folded bitline pairs are reversed if a driven wordline accesses a memory cell connected to either BL1 or BL1*. Since the BLSA is a cross-coupled latch circuit, the accessed bitline and its corresponding complementary bitline are driven to opposite logic potential levels after the data has been read out. To precharge the bitlines in preparation for the next read operation, the equalization signal (BLEQ_L or BLEQ_R) for the accessed sub-array is driven to the high logic level to turn on its respective equalization transistors. The bitlines having the high logic potential level will equalize with the bitlines having the low logic potential level through charge sharing, such that both reach a mid-point potential level. The bitlines of the unaccessed sub-array remain precharged throughout the read operation.
Because equalization transistors 42, 43 and 44, 45 are placed near the two extremities of their respective folded bitline pairs, the time required for equalization is short when compared to the equalization speed of the open bitline architecture shown in FIG. 3.
Given that the bitlines of FIGS. 3 and 4 are the same length and width, the time constant for each bitline in FIG. 3 is expressed as τopen = RC, where R is the lumped resistance and C is the lumped capacitance of the bitline. Each bitline of FIG. 4 effectively has half the resistance and half the capacitance of a bitline of FIG. 3, due to the additional equalization transistor placed at the extremities of the folded bitlines. Therefore, relative to the bitlines of FIG. 3, the time constant is expressed as τfolded = (R/2)(C/2) = RC/4.
Accordingly, equalizing a bitline of FIG. 4 is approximately four times faster than equalizing a bitline of FIG. 3.
There exist precharge schemes in which equalization transistors are not used for precharging bitlines to a mid-point potential level. Instead, a precharge voltage is supplied directly to the bitlines. Unfortunately, the circuit for generating such a precharge voltage must hold the bitlines accurately at the mid-point level, which makes it difficult to design and subject to variations in the semiconductor fabrication process.
Despite the precharge speed advantage of the folded bitline architecture over the open bitline architecture, the folded bitline architecture does not allow efficient packing of ternary dynamic CAM cells. For highest packing density, ternary dynamic CAM cells are arrayed as a single line of cells under a common wordline node as well as a common matchline node. As such, adjacent bitlines are necessarily active during row access operations. This excludes the use of a folded bitline architecture which requires adjacent bitlines to act as precharge-level references. However, a ternary dynamic CAM memory array employing an open bitline architecture is not suitable for high speed applications due to its slower precharge speed.
It is therefore desirable to provide a ternary dynamic CAM memory array architecture which operates at high speed and is arranged with an efficient packing density so as to occupy a small silicon area.
It is an object of the present invention to obviate or mitigate at least one disadvantage of previous ternary dynamic CAM memory array architectures. In particular, it is an object of the present invention to provide a ternary dynamic CAM memory array architecture that operates at high speed and occupies a small silicon area.
In a first aspect, the present invention provides a bitline precharge circuit for equalizing a first and second bitline. The circuit includes a bitline overwrite circuit for writing preset complementary logic potential levels onto the first and second bitlines, and an equalization circuit for shorting together the first and second bitlines after the preset complementary logic potential levels are written onto the first and second bitlines.
In further embodiments of the present aspect, the bitline overwrite circuit includes bitline write drivers connected to respective databuses, or a local bitline write circuit. In an alternate embodiment of the present aspect, the local bitline write circuit includes a transistor for coupling the first bitline to a low logic potential level and a transistor for coupling the second bitline to a high logic potential level.
In yet another alternate embodiment of the present aspect, the equalization circuit includes at least one equalization transistor connected between the first and second bitlines, or two equalization transistors connected between the first and second bitlines, where the first and second equalization transistors are connected at opposite ends of the first and second bitlines, respectively.
In another aspect, the present invention provides a bitline architecture for a ternary content addressable memory. The bitline architecture includes a first bitline sense amplifier connected to first and second complementary bitlines arranged in an open bitline configuration, a second bitline sense amplifier connected to third and fourth complementary bitlines arranged in an open bitline configuration, ternary content addressable memory cells for storing two bits of data connected to the first and third bitlines, ternary content addressable memory cells for storing two bits of data connected to the second and fourth bitlines, a first bitline overwrite circuit for writing preset complementary logic potential levels onto the first and third bitlines, a second bitline overwrite circuit for writing preset complementary logic potential levels onto the second and fourth bitlines, a first precharge circuit for equalizing the first and third bitlines, and a second precharge circuit for equalizing the second and fourth bitlines.
In an alternate embodiment of the present aspect, the first and second bitline sense amplifiers include CMOS cross coupled inverters. In another alternate embodiment of the present aspect, the ternary content addressable memory cells are ternary DRAM type CAM cells.
In a further aspect of the present invention, there is provided a content addressable memory. The content addressable memory consists of content addressable memory cells arranged in rows and columns, each cell having a first and second bitline, a bitline overwrite circuit for each pair of first and second bitlines for writing preset complementary logic potential levels onto the first and second bitlines, an equalization circuit for each pair of first and second bitlines for shorting together the first and second bitlines after the preset complementary logic potential levels are written onto the first and second bitlines, an address decoder for addressing rows of cells, write data circuitry for writing data to the cells, and read circuitry for reading data from the cells.
In yet another aspect of the present invention, there is provided a method for precharging first and second bitlines in a content addressable memory. The method consists of writing preset complementary logic potential levels onto the first and second bitlines, and equalizing the first and second bitlines.
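The two-step method of this aspect can be sketched as follows. This is an illustrative software model only (the function name and supply values are assumptions, not the circuit): the overwrite step forces known complementary full logic levels regardless of what a previous operation left on the bitlines, so the subsequent equalization always settles at the mid-point:

```python
def precharge_bitlines(v_high=1.8, v_low=0.0):
    """Model of the claimed two-step precharge: overwrite the bitline
    pair to preset complementary logic levels, then equalize."""
    # Step 1: bitline overwrite — drive the pair to known
    # complementary full logic levels.
    bl, bl_bar = v_high, v_low
    # Step 2: equalization — shorting two equal bitline capacitances
    # shares their charge, so both settle at the mid-point potential.
    v_mid = (bl + bl_bar) / 2.0
    return v_mid, v_mid

# Both bitlines end at the mid-point, ready for the next read.
```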