The present invention relates to an improved architecture for associative memories used for in-memory analytics and the like, and in particular to an architecture providing greater memory densities in integrated circuits.
A common computational task for an electronic computer is that of searching for a particular value in memory. For example, in routing packets over a network, it may be necessary to search for a packet address in memory in order to route the packet through a correct port.
Conventional random-access memory operates by receiving an address designating a memory location and providing access to the data stored at that address, for example, for reading or modifying that data. In searching operations, random-access memories typically must access multiple memory addresses in series before it can be determined whether the sought data exists and, if so, where it is located. The time required for each sequential memory access slows the searching operation.
Associative memories provide a faster way of searching for data. Such memories may receive the value of the data being searched for (a search pattern) and simultaneously review all memory addresses for that pattern. The associative memory typically returns a list of storage addresses holding data that matches the search pattern and these addresses may serve as a link to other needed data. A specialized processor (for example, a network processor) working with an associative memory can perform searches far in excess of the speeds obtainable with conventional random-access memory.
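The parallel lookup behavior described above can be modeled in software. The following minimal sketch (the memory contents and function name are illustrative, not part of the invention) performs the comparisons sequentially but returns the same result a hardware associative memory would produce in a single cycle, namely the list of matching addresses:

```python
def cam_search(memory, search_pattern):
    """Return the addresses of all stored words matching the search pattern.

    A hardware CAM compares every location simultaneously; this software
    model iterates, but the result is identical: a list of matching
    addresses that may serve as links to other needed data.
    """
    return [addr for addr, word in enumerate(memory)
            if word == search_pattern]

memory = ["10110", "00101", "10110", "11111"]
print(cam_search(memory, "10110"))  # both locations holding the pattern
print(cam_search(memory, "00000"))  # no match: empty list
```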
One form of associative memory is a (binary) content addressable memory (CAM) which stores binary values that can be compared to binary search patterns. One drawback to such CAMs occurs when it is desired to search for a range of values, for example, a range of addresses between upper and lower limits. In such cases, each search pattern within the range must be stored separately in the CAM, consuming valuable CAM memory space.
To address this drawback, ternary content addressable memories (TCAMs) have been developed which store not only the binary values of 0 and 1 but also a “don't care” value (typically denoted X) that matches either a 0 or a 1 in the search pattern. With this additional don't care state, a range of search patterns, for example, 10000 to 10011, can be stored in a single TCAM memory location as 100XX.
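The don't-care matching rule can be sketched in a few lines of software. In this model (a simplification assuming each stored word is a string over the alphabet 0, 1, and X), a stored X position matches any search-pattern bit, so one stored word covers an entire range:

```python
def tcam_match(stored_word, search_pattern):
    """Return True if the search pattern matches the stored ternary word.

    A stored 'X' (don't care) matches either a 0 or a 1 in the
    corresponding search-pattern position; 0 and 1 must match exactly.
    """
    return all(s == 'X' or s == p
               for s, p in zip(stored_word, search_pattern))

# The single stored word 100XX covers the whole range 10000 to 10011.
print(tcam_match("100XX", "10000"))  # in range
print(tcam_match("100XX", "10011"))  # in range
print(tcam_match("100XX", "10100"))  # outside the range
```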
Memories, including random-access memories, CAMs, and TCAMs, employ sense amplifiers which receive electrical signals from the transistor storage cells of each memory address and interpret those signals into binary (or don't care) values. In a conventional random-access memory, sense amplifiers can be shared between different memory addresses because only one address will be accessed at a time. The sharing can be implemented, for example, by means of a multiplexer switching the sense amplifier between portions of the memory according to a portion of the memory address used to access the memory.
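This sharing arrangement can be illustrated with a simple software model (the bank organization and names below are illustrative assumptions, not a specific circuit of the invention): a portion of the address selects, via the multiplexer, which memory bank is routed to the shared sense amplifier, while the remaining address bits select the row within that bank:

```python
def shared_read(memory_banks, address, bank_select_bits):
    """Model a multiplexer routing one bank to a shared sense amplifier.

    The low bits of the address act as the multiplexer select lines;
    the remaining bits choose the row within the selected bank.  Only
    one bank is sensed per access, which is why the amplifier can be
    shared in a conventional random-access memory.
    """
    num_banks = 1 << bank_select_bits
    bank = address & (num_banks - 1)    # low bits: multiplexer select
    row = address >> bank_select_bits   # remaining bits: row in bank
    return memory_banks[bank][row]      # the one value actually sensed

# Two banks sharing one sense amplifier (bank_select_bits = 1).
banks = [[10, 20], [30, 40]]
print(shared_read(banks, 0, 1))  # bank 0, row 0
print(shared_read(banks, 1, 1))  # bank 1, row 0
```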
In contrast, associative memory is intended to operate in parallel over all memory addresses, and accordingly a separate sense amplifier must be provided for each memory address to permit parallel searching of the entire memory range.
The sense amplifiers can consume a substantial area in an associative memory; indeed, a majority of the area of an associative memory circuit is normally occupied by sense amplifiers. The result is that associative memories are relatively expensive on a per-bit basis when compared to conventional random-access memory, and this expense substantially limits their use.