The Transparent Interconnection of Lots of Links (TRILL) protocol is usually implemented by devices called routing bridges, or RBridges. The TRILL technology runs at the data link layer (Layer 2) and combines the advantages of a bridge and a router; that is, a link state routing technology is used at the data link layer without interfering with the operation of an upper-layer router.
Generally, the packet header of a TRILL packet is shown in FIG. 1, where the egress routing bridge nickname field stores the nickname of the target routing bridge during unicasting and stores the nickname of the multicast distribution tree during multicasting; the ingress routing bridge nickname field stores the nickname of the source routing bridge; and the hop count is the number of times that the TRILL packet may still be forwarded to a next hop during propagation.
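The fields above can be sketched as a minimal header packer following the TRILL base-protocol layout (RFC 6325: version, reserved bits, a multi-destination flag, options length, and a 6-bit hop count, followed by the two 16-bit nicknames). The function name and the simplified packing below are illustrative, not a full codec:

```python
import struct

def pack_trill_header(egress_nickname, ingress_nickname, hop_count, multi_dest=False):
    """Pack a minimal TRILL header: one flags/hop-count word plus two nicknames.

    For unicast, egress_nickname is the target RBridge's nickname; for
    multicast (multi_dest=True) it instead holds the distribution-tree
    nickname, as described in the text.
    """
    # First 16 bits: 2-bit version (0), 2-bit reserved, 1-bit multi-destination
    # flag (bit 11), 5-bit options length (0 here), 6-bit hop count (bits 5-0).
    first = ((1 << 11) if multi_dest else 0) | (hop_count & 0x3F)
    return struct.pack("!HHH", first, egress_nickname, ingress_nickname)

# A multicast packet from nickname 0x0001 onto tree nickname 0x0002, 5 hops left:
hdr = pack_trill_header(egress_nickname=0x0002, ingress_nickname=0x0001,
                        hop_count=5, multi_dest=True)
```

Because the hop count occupies only 6 bits, its maximum value is 63, which bounds how far a packet can propagate.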
Operation, administration and maintenance (OAM) generally covers connectivity detection, error isolation, and fault diagnosis between two nodes, and OAM for the TRILL technology has become a main means of performing connectivity detection, error isolation, and fault diagnosis between two nodes.
In the prior art, when OAM of TRILL tests a multicast path by using a traceroute command, only an entire multicast distribution tree can be tested. FIG. 2 is a schematic diagram of testing a multicast path by using traceroute in the prior art. Ovals numbered 1, 2, 3, 4, 5, 6, and 7 in FIG. 2 represent a routing bridge RB1, a routing bridge RB2, a routing bridge RB3, a routing bridge RB4, a routing bridge RB5, a routing bridge RB6, and a routing bridge RB7 respectively. Assume that a multicast distribution tree tree1 rooted at the routing bridge RB2 is established in the entire topology (the tree1 is shown by the bold lines in FIG. 2); traceroute may then be performed on the tree1 at the source node, the routing bridge RB1. A specific traceroute process is as follows: the routing bridge RB1 sends a multicast packet whose ingress routing bridge nickname is RB1 and whose egress routing bridge nickname is tree1; the multicast packet includes connectivity detection request information (which may be an echo request message), so the connectivity detection request information is multicast with the packet over the network formed by the routing bridges. When sending the multicast packet, the routing bridge RB1 first sets the hop count in the connectivity detection request information to 1, and then increases the hop count by 1 each time it resends the connectivity detection request information, until the hop count reaches a maximum value. Generally, the connectivity detection request information may be carried in an OAM packet for forwarding; for the packet header of the OAM packet, reference may be made to FIG. 1. As long as the value of the hop count is large enough, the connectivity detection request information is copied and forwarded with the multicast packet along the tree1, as shown by the arrows in FIG. 2.
For example, the routing bridge RB1 first forwards the connectivity detection request information to the routing bridge RB7 and the routing bridge RB5 in the distribution tree tree1. After receiving the packet, the routing bridge RB7 and the routing bridge RB5 each deduct the hop count in the received connectivity detection request information by 1, so that the hop count becomes 0. Because the routing bridge RB7 is a leaf node, the routing bridge RB7 sends connectivity detection reply information (which may be an echo reply message) to the routing bridge RB1, while the routing bridge RB5 sends unreachable information (which may be an error message) to the routing bridge RB1 and stops further forwarding. After receiving the unreachable information returned by the routing bridge RB5, the routing bridge RB1 adds 1 to the hop count and continues to send, through multicasting, connectivity detection request information of which the hop count is 2 to the routing bridge RB7 and the routing bridge RB5. After receiving the multicast packet, the routing bridge RB5 deducts the hop count by 1, so that the hop count becomes 1, and then continues to forward the multicast packet to the routing bridge RB2 along the tree1; after receiving the multicast packet, the routing bridge RB2 deducts the hop count by 1, so that the hop count becomes 0, and then sends unreachable information to the routing bridge RB1. The rest can be deduced by analogy: each time the routing bridge RB1 receives the latest unreachable information, it increases the hop count, until the hop count reaches a maximum value. As the hop count increases, the routing bridge RB1 obtains, in sequence, unreachable information sent by nodes that are one hop, two hops, three hops . . . n hops away from the routing bridge RB1, and the unreachable information carries information about the nodes that send it.
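The round-by-round procedure above can be sketched as a small simulation. The tree1 adjacency used here is an assumption consistent with the description (RB1 reaches RB7 and RB5 first, RB5 forwards to RB2, and RB7, RB3, and RB4 are the leaves); it is not taken from FIG. 2 itself, and the function names are illustrative:

```python
# Assumed tree1 forwarding adjacency, oriented away from the source RB1.
TREE1_CHILDREN = {
    "RB1": ["RB7", "RB5"],
    "RB5": ["RB2"],
    "RB2": ["RB6"],
    "RB6": ["RB3", "RB4"],
    "RB7": [], "RB3": [], "RB4": [],
}

def forward(node, hop_count, replies):
    """Deliver one echo request to `node` and recurse along the tree."""
    hop_count -= 1                      # each node deducts the hop count by 1
    if hop_count == 0:
        # Hop count exhausted: a leaf answers with an echo reply, while a
        # non-leaf answers with unreachable information and stops forwarding.
        kind = "echo-reply" if not TREE1_CHILDREN[node] else "unreachable"
        replies.append((node, kind))
        return
    for child in TREE1_CHILDREN[node]:
        forward(child, hop_count, replies)

def traceroute(source="RB1", max_hops=8):
    """Send rounds with hop count 1, 2, ... and record the responses per round."""
    rounds = {}
    for hops in range(1, max_hops + 1):
        replies = []
        for child in TREE1_CHILDREN[source]:
            forward(child, hops, replies)
        if not replies:                 # no node is exactly this many hops away
            break
        rounds[hops] = replies
    return rounds
```

Running `traceroute()` under this assumed topology reproduces the walkthrough: round 1 yields an echo reply from RB7 and unreachable information from RB5, round 2 yields unreachable information from RB2, and so on until the leaves RB3 and RB4 reply.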
In addition, during the foregoing multicast process, the RB1 receives connectivity detection reply information sent by the RB7, the RB3, and the RB4 respectively. In this way, the RB1 may depict the structure of the entire tree according to the received unreachable information and connectivity detection reply information.
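The depicting step can be sketched as grouping the responding nodes by their hop distance from the source, since each response arrives in a round whose hop count the source chose. The helper name and the example data below are hypothetical, matching the assumed topology discussed above:

```python
def depict_tree_levels(rounds):
    """Group responding nodes by hop distance from the source.

    `rounds` maps a hop count to the list of (node, kind) responses the
    source received in that round; leaves (echo-reply senders) terminate
    their branch of the multicast distribution tree.
    """
    return {hops: [node for node, _ in replies]
            for hops, replies in sorted(rounds.items())}

# Example: responses accumulated over four rounds (assumed topology).
example_rounds = {
    1: [("RB7", "echo-reply"), ("RB5", "unreachable")],
    2: [("RB2", "unreachable")],
    3: [("RB6", "unreachable")],
    4: [("RB3", "echo-reply"), ("RB4", "echo-reply")],
}
levels = depict_tree_levels(example_rounds)
```

Note that hop distances alone give the depth of each node, not its parent; the per-node information carried in the unreachable messages is what lets the source resolve the full tree structure.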
The prior art has at least the following problems: when performing traceroute on a multicast path, current OAM of TRILL can only trace all nodes in an entire multicast distribution tree and cannot perform a connectivity check on a designated node in the multicast distribution tree, which is poorly targeted and inefficient; moreover, the source node receives unreachable information from all the nodes in the entire multicast distribution tree, which makes it relatively difficult for the source node to identify a path.