The present invention relates to a system and method for searching input keys, and, more particularly, to a system of concatenated Associative Search Engines (ASEs) and a method of integrating the ASEs to enable high-performance searching of multiple-field or multi-dimensional keys.
The use of external memories, in particular DRAMs, to increase the storage capacity of a generic ASE, and in particular, the Range Search Engine (RSE) of HyWire Ltd., was disclosed in a co-pending U.S. patent application (Ser. No. 10/688,986) entitled “Multi-Dimensional Associative Search Engine Having An External Memory”, which is incorporated by reference for all purposes as if fully set forth herein. The external memories are controlled by a Memory Control Logic, which can be located inside or outside the RSE, and are connected to the RSE via a Control & Data Bus (CDB).
The RSE-chained coprocessor is connected to a Network Processing Unit (NPU); it provides a unique and flexible way of parsing the headers of incoming packets according to a set of rules determined by the NPU, of concurrently performing several search operations on the parsed information in different memory tables, and of combining the search results. These results can be used for high-performance packet forwarding, classification, security, accounting and billing, statistics, etc., thereby significantly offloading the NPU, since all of these tasks (and packet classification in particular) are processor-intensive.
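The parse-search-combine flow described above can be illustrated with a minimal software sketch. All names here (the parsing rules, the field tables, the byte layout) are hypothetical stand-ins chosen for illustration, not part of the specification:

```python
# Illustrative sketch (hypothetical names and layout): parse a packet header
# into fields according to NPU-defined rules, search each field in its own
# memory table, and combine the per-field results.

# Hypothetical parsing rules: field name -> (byte offset, length) in the header
PARSE_RULES = {
    "dst_ip":   (0, 4),
    "src_ip":   (4, 4),
    "dst_port": (8, 2),
}

# One lookup table per field (stand-ins for the per-ASE memory tables)
TABLES = {
    "dst_ip":   {b"\x0a\x00\x00\x01": "route_A"},
    "src_ip":   {b"\xc0\xa8\x00\x05": "trusted"},
    "dst_port": {b"\x00\x50": "http"},
}

def parse_header(header: bytes) -> dict:
    """Extract each key field from the raw header per the parsing rules."""
    return {name: header[off:off + ln] for name, (off, ln) in PARSE_RULES.items()}

def search_all(header: bytes) -> dict:
    """Search every parsed field in its table and combine the results."""
    fields = parse_header(header)
    return {name: TABLES[name].get(value) for name, value in fields.items()}

header = b"\x0a\x00\x00\x01\xc0\xa8\x00\x05\x00\x50"
print(search_all(header))
# {'dst_ip': 'route_A', 'src_ip': 'trusted', 'dst_port': 'http'}
```

In hardware the per-field searches would run concurrently in separate engines; the sequential dictionary lookups above only model the data flow, not the parallelism or the range-matching capability of an RSE.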
In some state-of-the-art configurations of coprocessors operating with NPUs, several search engines or coprocessors are used in parallel to search multiple-field keys, each engine being designed to handle one or more fields of these keys. The relevant multiple-field key (or keys) must be parsed and submitted to each search engine, and the result signals arriving from each engine must be processed. Such an architecture requires a large number of input/output pins in the NPU, makes inefficient use of the bus bandwidth, and loads the NPU.
One alternative for reducing the pin count and improving bus-bandwidth utilization in the NPU is the use of a Supervisory Coprocessor, as shown in FIG. 1a. The Supervisory Coprocessor receives data packets from the NPU and distributes tasks to a plurality of ASEs. This coprocessor may parse the packet headers of the incoming packets (instead of the NPU) according to a set of rules determined by the NPU. The ASEs can concurrently perform search operations on the parsed information in different memory tables and transfer the search results to the Supervisory Coprocessor, which may combine them and provide the combined result to the NPU. In this configuration, the Supervisory Coprocessor relieves the NPU of packet-header parsing and search-result combination.
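The fan-out/combine role of the supervisory coprocessor can be sketched as follows. This is an assumption-laden software model, not the actual device: the class names, tables, and thread-based concurrency are hypothetical illustrations of the dispatch-and-merge pattern only.

```python
# Illustrative sketch (hypothetical names): a supervisory coprocessor fans a
# parsed key out to several search engines concurrently and combines their
# results into a single reply for the NPU.

from concurrent.futures import ThreadPoolExecutor

class SimpleASE:
    """Stand-in for one Associative Search Engine holding one memory table."""
    def __init__(self, table):
        self.table = table

    def search(self, key):
        return self.table.get(key, "no_match")

class SupervisoryCoprocessor:
    """Distributes per-field searches to the ASEs and merges the results."""
    def __init__(self, ases):
        self.ases = ases  # field name -> ASE

    def lookup(self, parsed_fields):
        with ThreadPoolExecutor(max_workers=len(self.ases)) as pool:
            futures = {name: pool.submit(self.ases[name].search, key)
                       for name, key in parsed_fields.items()}
            # Combine the per-engine results into one reply for the NPU
            return {name: fut.result() for name, fut in futures.items()}

supervisor = SupervisoryCoprocessor({
    "dst_ip":   SimpleASE({"10.0.0.1": "port_3"}),
    "protocol": SimpleASE({"tcp": "stateful"}),
})
print(supervisor.lookup({"dst_ip": "10.0.0.1", "protocol": "udp"}))
# {'dst_ip': 'port_3', 'protocol': 'no_match'}
```

Note how every key and every result passes through the single supervisor object; this is the bottleneck the following paragraphs identify, since the real device must carry all that traffic over its own pins and internal multiplexing logic.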
One commercial version of this configuration, schematically depicted in FIG. 1b, utilizes the Vichara™ 81000 Search Supervisory Coprocessor (disclosed by Cypress Semiconductor Corporation in “Cypress Announces Industry's First Search Supervisory Coprocessor, Providing Comprehensive Search-System Management”). The design is relatively complex, and appears merely to shift the problems of an excessively large pin count and inefficient use of bus bandwidth from the NPU to the Search Supervisory Coprocessor. The use of a supervisory coprocessor, disposed inside or outside the NPU, requires either a powerful driver having a large fan-out to drive all the ASEs, or several individual drivers each serving one or more ASEs. This configuration also requires a complex multiplexer to combine and synchronize the outputs of all the ASEs for the integrated, high-performance, complex search operations required for multi-dimensional classification, forwarding, content search, etc.
There is therefore a recognized need for, and it would be highly advantageous to have, a packet co-processing system of linked Associative Search Engines (ASEs) and a method of integrating the ASEs that enable high-performance searching of multiple-field or multi-dimensional keys, through efficient use of bus bandwidth and without an excessive pin-count requirement.