A major task of a router is to forward Internet Protocol (IP) packets, that is, to forward a packet that arrives at an input port of the router to the correct output port according to the destination IP address in the header of the packet. Routing search is the process in which a routing table in a router is searched according to the destination IP address of a packet to obtain next-hop information for the packet.
Routing search follows the principle of longest prefix match (LPM): when multiple prefixes match an input IP address, the next-hop information corresponding to the matching prefix with the longest mask is the final search result.
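The LPM rule can be sketched in a few lines of Python. The routing table below is a made-up illustration (bit-string prefixes mapped to next-hop labels), not a table from this document's figures:

```python
def longest_prefix_match(table, ip_bits):
    """Return the next-hop info of the longest prefix matching ip_bits.

    table: dict mapping bit-string prefixes (e.g. '1110') to next-hop info.
    ip_bits: the destination IP address written as a bit string.
    """
    best = None
    for prefix, next_hop in table.items():
        # A prefix matches when the address begins with its bits.
        if ip_bits.startswith(prefix):
            # Keep only the match with the longest mask.
            if best is None or len(prefix) > len(best[0]):
                best = (prefix, next_hop)
    return best[1] if best is not None else None

# Both '1' and '1110' match this address; the longer prefix wins.
table = {'1': 'hop A', '1110': 'hop B', '0': 'hop C'}
print(longest_prefix_match(table, '111010011'))  # -> 'hop B'
```

This linear scan is only for clarity; the algorithms discussed below (TCAM, Trie trees) exist precisely to avoid comparing against every entry.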
Common routing search algorithms include the ternary content addressable memory (TCAM) algorithm, algorithms based on a Trie tree, and the like.
The TCAM algorithm can simultaneously match an input IP address against all routing entries. However, a TCAM is expensive, so implementing routing search with a TCAM is costly. A TCAM also has a much lower integration density than a common memory and a limited capacity, which makes it very difficult to handle a million-entry routing table. In addition, a TCAM has high power consumption. Therefore, a multi-bit Trie algorithm is currently used more frequently.
The algorithm based on a Trie tree (also referred to as a prefix tree) creates a binary tree or a multiway tree according to the bit strings of the prefixes. If one bit is considered at a time, a binary tree, also referred to as a single-bit Trie tree, is created. FIG. 1 shows a single-bit Trie tree that holds 11 prefixes. In the single-bit Trie tree, the nodes corresponding to the prefixes p0 to p10 listed on the left of FIG. 1 are represented by black circles, connecting points are represented by white circles, and each node is identified by a unique number. If multiple bits are considered at a time, a multi-bit Trie tree is created. In a multi-bit Trie tree, the quantity of bits considered at a time is generally fixed and is referred to as the stride of the Trie tree.
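A single-bit Trie tree of the kind shown in FIG. 1 can be built by walking the prefix one bit at a time. The sketch below is illustrative only (the prefixes inserted are made up, not those of FIG. 1); nodes carrying next-hop information play the role of the black circles:

```python
class TrieNode:
    def __init__(self):
        self.children = {}    # '0' or '1' -> child TrieNode
        self.next_hop = None  # set only on prefix nodes (black circles)

def insert(root, prefix, next_hop):
    """Insert a bit-string prefix such as '1110*' (trailing '*' optional)."""
    node = root
    for bit in prefix.rstrip('*'):
        node = node.children.setdefault(bit, TrieNode())
    node.next_hop = next_hop

def lookup(root, ip_bits):
    """LPM search: remember the last prefix node passed on the way down."""
    node, best = root, None
    for bit in ip_bits:
        if node.next_hop is not None:
            best = node.next_hop
        node = node.children.get(bit)
        if node is None:
            return best
    return node.next_hop if node.next_hop is not None else best

root = TrieNode()
insert(root, '1*', 'p1')
insert(root, '1110*', 'p3-like')
print(lookup(root, '111010011'))  # -> 'p3-like'
```

Because only one bit is consumed per step, a lookup visits up to one node per address bit; the multi-bit Trie tree described next reduces the number of steps by consuming a whole stride at a time.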
A multi-bit Trie tree may be viewed as a single-bit Trie tree that has been divided into multiple sub-trees according to the stride, with one Trie node created for each sub-tree. Each Trie node has an associated prefix, which is the prefix value of the root node of the sub-tree corresponding to that Trie node. If prefixes are distributed in a sub-tree, a prefix node further needs to be created for the sub-tree, and all the prefixes located in the sub-tree are saved on the prefix node, where each prefix corresponds to one piece of next-hop information. Generally, only a next-hop pointer, that is, a pointer that points to the next-hop information, is saved in the multi-bit Trie tree. Each prefix may be divided into segments according to the stride, and the bit string corresponding to each segment is referred to as the segment key value of the prefix in that segment. For example, when the stride is equal to 3, the first segment key value of the prefix ‘10101*’ is ‘101’, and the second segment key value is ‘01*’.
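Splitting a prefix into its segment key values for a given stride can be sketched as follows; `segment_keys` is a helper name assumed here for illustration:

```python
def segment_keys(prefix, stride):
    """Split a bit-string prefix such as '10101*' into per-stride segment keys.

    A final short segment is padded with '*', matching the example in the
    text: with stride 3, '10101*' yields '101' and '01*'.
    """
    bits = prefix.rstrip('*')
    keys = []
    for i in range(0, len(bits), stride):
        seg = bits[i:i + stride]
        keys.append(seg + '*' * (stride - len(seg)))
    return keys

print(segment_keys('10101*', 3))  # -> ['101', '01*']
```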
FIG. 2 shows a multi-bit Trie tree that is created from the prefixes in FIG. 1 with a stride equal to 3. The multi-bit Trie tree includes 7 Trie nodes, that is, Trie Node T1 to Trie Node T7 shown in FIG. 2. Each Trie node is configured with a prefix node, that is, a Prefix Node shown in FIG. 2. Each Prefix Node saves a next-hop pointer for each prefix on the Trie node corresponding to that Prefix Node. For example, the Prefix node corresponding to Trie Node T4 in FIG. 2 saves the next-hop pointer “RE Index of P3” of the prefix p3. Each prefix node except the prefix node corresponding to Trie Node T1 further saves a present longest prefix match (PLPM). The PLPM of a Prefix node is the longest prefix that covers the sub-tree corresponding to that Prefix node. “RE Index of PLPM” on a Prefix node in FIG. 2 represents the next-hop pointer of the PLPM of that Prefix node.
To improve routing search performance, routing search generally uses a search apparatus implemented in hardware. Such a search apparatus generally uses a multi-stage pipeline structure. The algorithm based on a multi-bit Trie tree can be implemented by distributing the multi-bit Trie tree structure across the stages of the pipeline.
FIG. 3 is a schematic diagram showing the multi-bit Trie tree in FIG. 2 placed in a 5-stage pipeline structure. As shown in FIG. 3, the first 3 stages (stage1 to stage3) of the pipeline structure save the 3 levels of Trie nodes, the fourth stage (stage4) saves the Prefix nodes corresponding to the 7 Trie nodes, and the fifth stage (stage5) saves the RE Index corresponding to each prefix on each Prefix node. During routing search, a search apparatus based on the foregoing pipeline structure processes a to-be-matched IP address (that is, a search keyword) stage by stage along the pipeline to finally obtain a search result. For example, for the search keyword 111 010 011, the routing search process includes:
At stage1, T1 is accessed; because the prefix corresponding to node 1 on T1 matches any search keyword, the pointer of the Prefix node corresponding to T1 is saved, and the first segment key ‘111’ is extracted from the search keyword.
At stage2, the subnodes T2, T3, and T4 of T1 are accessed according to the segment key ‘111’; because the first segment key of the prefix p3 (that is, 111*) corresponding to node 9 on T4 is the longest match for the first segment key ‘111’ of the search keyword, the saved Prefix-node pointer is updated with the pointer of the Prefix node corresponding to T4, and the second segment key ‘010’ is extracted from the search keyword.
At stage3, the subnode T7 of T4 is accessed according to the segment key ‘010’; because the second segment key ‘010’ of the prefix p10 (1110100*) corresponding to node 21 on T7 is the longest match for the second segment key ‘010’ of the search keyword, the saved Prefix-node pointer is updated with the pointer of the Prefix node corresponding to T7, and the third segment key ‘011’ is extracted from the search keyword.
At stage4, because T7 has no Trie subnode, the Prefix node corresponding to T7 is accessed by using the most recently saved Prefix-node pointer. The third segment key of the search keyword is matched against the third segment key of each prefix on that Prefix node; the third segment key ‘0*’ of the prefix p10 is the longest match for the third segment key ‘011’ of the search keyword. Therefore, the location corresponding to “RE Index of p10” (the next-hop pointer of the prefix p10) in the RE Index array on the Prefix node corresponding to T7 is used as the output of stage4.
At stage5, the RE Index corresponding to the prefix p10 is obtained according to the location in the RE Index array obtained at stage4.
Finally, the next-hop information corresponding to the search keyword can be obtained according to the RE Index returned by the search apparatus.
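The staged process above can be mirrored in software. The sketch below is an illustrative analogue only: the real apparatus is a hardware pipeline, the two-level tree is a made-up fragment in the spirit of FIG. 2 (not its exact contents), and the PLPM fallback carried by the Prefix nodes is omitted for brevity:

```python
STRIDE = 3

# Hypothetical node layout: each Trie node maps segment keys to children
# and carries the prefix patterns of its Prefix node with their RE Index.
trie = {
    'T1': {'children': {'111': 'T4'}, 'prefixes': {}},
    'T4': {'children': {'010': 'T7'}, 'prefixes': {}},
    'T7': {'children': {}, 'prefixes': {'0*': 'RE Index of p10'}},
}

def seg_match(pattern, segment):
    # A pattern such as '0*' matches any segment beginning with '0'.
    return segment.startswith(pattern.rstrip('*'))

def staged_search(keyword):
    segments = [keyword[i:i + STRIDE] for i in range(0, len(keyword), STRIDE)]
    node, depth = 'T1', 0
    # Stages 1..3: descend one Trie level per stage, one segment key each,
    # remembering the last Trie node reached (its Prefix-node pointer).
    while depth < len(segments):
        child = trie[node]['children'].get(segments[depth])
        if child is None:
            break
        node, depth = child, depth + 1
    # Stage 4: match the next segment on the Prefix node of the last Trie
    # node reached; the longest matching pattern wins.
    best = None
    for pattern, re_index in trie[node]['prefixes'].items():
        if depth < len(segments) and seg_match(pattern, segments[depth]):
            if best is None or len(pattern.rstrip('*')) > len(best[0].rstrip('*')):
                best = (pattern, re_index)
    # Stage 5 would translate the RE Index into next-hop information.
    return best[1] if best else None

print(staged_search('111010011'))  # -> 'RE Index of p10'
```

Note that each stage reads only memory local to that stage, which is what lets the hardware pipeline accept a new search keyword every cycle.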
In the existing routing search implementation based on the multi-bit Trie algorithm, the whole multi-bit Trie tree structure is placed in the search apparatus, which occupies a vast amount of memory. In addition, the quantity of stages of the hardware pipeline corresponds to the quantity of levels of the multi-bit Trie tree; therefore, for IPv6 routes with long masks, the multi-bit Trie tree usually has many levels, and accordingly the pipeline structure also has many stages, which results in a high search delay and increases the difficulty of hardware implementation.