Longest Prefix Match (LPM) is the problem of finding, among a number of prefixes stored in a database, the longest prefix that matches a given lookup key. LPM can be used in many applications and is not limited to IP routing; however, since IP routing is one of the major LPM applications, the present invention will be discussed, by way of non-limiting example only, in the context of IP routing.
The growth of the Internet and the demand for increased network bandwidth have necessitated search capabilities that traditional RAMs (Random Access Memories) can barely meet. In a typical routing lookup operation, an Internet router looks up the destination address of an incoming packet in its forwarding table to determine the packet's next hop on its way to the final destination. This operation is performed on each arriving packet by every router in the path that the packet takes from its source to its destination.
The adoption of Classless Inter-Domain Routing (CIDR) since 1993 means that a routing lookup operation requires performing a Longest Prefix Match (LPM), in which the longest prefix matching the lookup key is found (wild card bits allowed), rather than a full match to every bit of the lookup key. A network processor, router, bridge, switch, or other network device performing similar routing lookup functions maintains a set of destination address prefixes in a forwarding table, also known as a Forwarding Information Base (FIB). A FIB contains a set of prefixes with corresponding output interfaces indicating destination addresses. LPM is used in IPv4 and IPv6 routers to select the most appropriate entry in the routing/forwarding table, which indicates the proper output interface through which the router should transmit the packet.
Given a packet, the Longest Prefix Match operation consists of finding the longest prefix in the FIB that matches the lookup key, for determining the destination of the packet to be transmitted by the router. The most commonly used lookup keys are the 32-bit address of Internet Protocol version 4 (IPv4), supporting an address space of 2^32 (about 10^9) addresses, and the 128-bit address of Internet Protocol version 6 (IPv6), supporting an address space of 2^128 (about 10^39) addresses.
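The LPM semantics described above can be illustrated by the following sketch. It is provided for illustration only and does not describe the present invention: it performs a naive linear scan over a small hypothetical IPv4 FIB (the entries, interface names, and helper functions are invented for the example), returning the next hop of the most specific matching prefix.

```python
def ip_to_int(addr):
    """Convert a dotted-quad IPv4 address to a 32-bit integer."""
    a, b, c, d = (int(p) for p in addr.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

# Hypothetical FIB entries: (prefix as integer, prefix length, next hop).
FIB = [
    (ip_to_int("10.0.0.0"), 8,  "if0"),
    (ip_to_int("10.1.0.0"), 16, "if1"),
    (ip_to_int("10.1.2.0"), 24, "if2"),
    (ip_to_int("0.0.0.0"),  0,  "default"),
]

def longest_prefix_match(addr):
    """Return the next hop of the longest FIB prefix matching addr."""
    key = ip_to_int(addr)
    best_len, best_hop = -1, None
    for prefix, length, hop in FIB:
        # A /length prefix matches when the top `length` bits agree.
        mask = ((1 << length) - 1) << (32 - length)
        if (key & mask) == prefix and length > best_len:
            best_len, best_hop = length, hop
    return best_hop

print(longest_prefix_match("10.1.2.7"))   # most specific /24 wins -> "if2"
print(longest_prefix_match("10.9.9.9"))   # only the /8 matches -> "if0"
print(longest_prefix_match("192.0.2.1"))  # falls through to the default route
```

The linear scan makes the matching rule explicit but is far too slow for real routers, which is precisely why the RAM- and TCAM-based structures discussed below exist.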
A FIB can be implemented based on Random Access Memories (RAMs). In this case, the FIB prefixes are held in one or more RAMs, and software or hardware executing search algorithms, such as M-Trie, Bitmap Tree, etc. (which are typically tree-based search algorithms), performs the LPM. In a RAM, information is stored and retrieved at a location determined by an address provided to the memory.
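The following sketch, provided for illustration only, shows a minimal binary trie of the kind that tree-based RAM algorithms such as M-Trie and Bitmap Tree refine (the prefixes and interface names are invented for the example). Note that a lookup visits one node per key bit, which is why the cost of RAM-based schemes grows with the lookup key width.

```python
class TrieNode:
    def __init__(self):
        self.children = [None, None]  # one child per bit value (0 or 1)
        self.next_hop = None          # set if a stored prefix ends here

def insert(root, prefix_bits, next_hop):
    """Insert a prefix, given as a string of '0'/'1' characters."""
    node = root
    for bit in prefix_bits:
        b = int(bit)
        if node.children[b] is None:
            node.children[b] = TrieNode()
        node = node.children[b]
    node.next_hop = next_hop

def lookup(root, key_bits):
    """Longest prefix match: remember the deepest next_hop seen."""
    node, best = root, root.next_hop
    for bit in key_bits:
        node = node.children[int(bit)]
        if node is None:
            break
        if node.next_hop is not None:
            best = node.next_hop
    return best

root = TrieNode()
insert(root, "10", "ifA")      # hypothetical 2-bit prefix
insert(root, "1011", "ifB")    # more specific 4-bit prefix
print(lookup(root, "101100"))  # deepest match -> "ifB"
print(lookup(root, "100111"))  # only "10" matches -> "ifA"
```

Practical algorithms inspect several bits per memory access (a multibit stride) to reduce the number of RAM reads, but the key-width dependence noted later in this section remains.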
Another alternative for implementing a FIB is by using a fast hardware lookup device, such as a TCAM. In a CAM (Content Addressable Memory), a search datum is provided to the CAM and every location in the memory is compared to it in parallel. The CAM responds with either a "match" or "mismatch" signal and returns the address whose contents matched the search datum. A CAM performs exact (binary) match searches, while the more powerful Ternary CAM (TCAM) adds masking capabilities by storing and searching a third "don't care" state. "Don't care" states act as wildcards during a search, and are thus particularly attractive for implementing longest prefix matching (LPM).
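The ternary match behavior described above can be modeled in software as follows. This sketch is illustrative only (the entry values, widths, and helper names are invented for the example): each entry stores a value together with a care mask, bits whose care-mask bit is 0 acting as "don't care" wildcards. A real TCAM compares all entries simultaneously in hardware; the loop below merely models the conventional priority ordering in which more specific prefixes are stored first.

```python
def make_entry(value, length, width=8):
    """Hypothetical helper: build (value, care_mask) for a /length prefix."""
    care = ((1 << length) - 1) << (width - length)
    return (value & care, care)

# Entries ordered most-specific first, as is conventional in TCAM FIBs.
entries = [
    (make_entry(0b10110000, 4), "ifB"),  # matches 1011****
    (make_entry(0b10000000, 2), "ifA"),  # matches 10******
]

def tcam_search(key):
    """Return the result of the first (highest-priority) matching entry."""
    for (value, care), result in entries:
        if (key & care) == value:  # "don't care" bits are masked out
            return result
    return None  # corresponds to the TCAM's "mismatch" signal

print(tcam_search(0b10111100))  # -> "ifB" (more specific entry wins)
print(tcam_search(0b10001111))  # -> "ifA"
```

Because every entry is examined in a single parallel step in hardware, the lookup time is constant regardless of the number of prefixes, which is the property exploited below.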
TCAMs have also been used for speeding up the destination address search function and the quality of service (QoS) lookup required by network routers. When using TCAMs for FIB implementation, the lookup time is constant and there is no search throughput performance penalty, as opposed to the performance penalty incurred when using RAMs for searching. On the other hand, TCAMs are substantially larger and more complex devices than RAMs and also dissipate significantly more power than RAMs, due to their parallel search capability. Another TCAM disadvantage is the lack of error correction protection (ECC or similar), which reduces overall system availability due to the higher probability of a memory bit error, particularly as the size of the utilized TCAM memory increases.
While advanced RAM based FIB implementations/algorithms are scalable in terms of the number of prefixes they can hold, they are not scalable from the lookup key width perspective, as the number of accesses to FIB memory depends on the lookup key width. In general, the wider the key, the more lookups are required; and since wide prefixes occupy more than one RAM entry, the tree-based algorithms used incur an execution time penalty for searching that is proportional to the key width.
A TCAM based FIB, on the other hand, is not scalable in terms of the number of prefixes, as it holds a fixed number of entries. The most advanced TCAM device known today has a capacity of 20 Mbit, which can be used to hold up to 0.5 M prefixes.
Accordingly, there is a long felt need for a system and method for performing a Longest Prefix Match (LPM) operation in a Forwarding Information Base (FIB) that would exploit the respective advantages of TCAMs and RAMs while overcoming their respective shortcomings, thereby achieving a scalable FIB organization with a flexible number of entries and a constant LPM lookup time. It would further be very desirable to implement wire-speed packet forwarding using such a system.