1. Field of the Invention
The present invention relates to a packet transmission device, and more particularly, to a packet transmission device that switches Pseudo Wires (PWs) in a Multi-protocol Label Switching (MPLS) network at a high speed.
2. Description of the Related Art
In recent years, services such as Pseudo Wire Emulation Edge to Edge (PWE3), which virtually provides a point-to-point Ethernet® line over an MPLS line, have been provided. Moreover, as high-speed and reliable MPLS networks are required, techniques that enable quick fault recovery on the MPLS line in 50 msec or less, such as Fast ReRoute (FRR), which reroutes traffic at a high speed when a fault occurs on a physical link of a network, have been applied. A technique that enables quick fault recovery on a network is described in Japanese Laid-open Patent Publication No. 10-117175. Hereinafter, in order to facilitate understanding of the technique, a fault recovery method by FRR used in known MPLS networks will be described with reference to FIG. 11. Further, a known packet transmission device will be described with reference to FIGS. 12 to 15.
FIG. 11 is a system chart illustrating an example of a known MPLS network configuration and fault recovery method. In FIG. 11, in a PWE3 in the known MPLS network, each link established between packet transmission devices (Provider Edges; hereinafter referred to as PEs) includes several Label Switched Paths (LSPs). In fault recovery by FRR, protection paths are provided to bypass points (links, nodes) where faults can occur on the protection target work paths.
In the example shown in FIG. 11, the paths established from a PE 50 through a PE 51 to a PE 52 are the protection target work paths. In preparation for an occurrence of a fault between the PE 50 and the PE 51, protection paths from the PE 50 through a PE 53 to the PE 52 are provided. The number of links and paths (LSPs) on the protection paths is the same as that on the work paths.
If a fault occurs between the PE 50 and the PE 51, the PE 50 that is disposed at a branch point performs a switching operation. The PE 52 that is disposed at a point where the paths merge again receives a label for a work path and a label for a protection path. Accordingly, the fault recovery is achieved when the PE 50 simply switches the label for the work path to that for the protection path.
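The switch-over at the branch-point PE described above can be sketched as follows. This is a minimal illustrative model, not the described device itself; the class name, label values, and LSP identifiers are assumptions introduced only for the sketch.

```python
# Hypothetical sketch of FRR switching at the branch-point PE (e.g., PE 50).
# All names and label values here are illustrative assumptions.

class BranchPointPE:
    def __init__(self):
        # Each protected LSP carries a label for the work path and a label
        # for the protection path; the merge-point PE accepts both labels.
        self.labels = {}   # lsp_id -> {"work": label, "protection": label}
        self.active = {}   # lsp_id -> "work" or "protection"

    def add_lsp(self, lsp_id, work_label, protection_label):
        self.labels[lsp_id] = {"work": work_label,
                               "protection": protection_label}
        self.active[lsp_id] = "work"

    def outgoing_label(self, lsp_id):
        # The label currently applied to packets of this LSP.
        return self.labels[lsp_id][self.active[lsp_id]]

    def on_link_fault(self):
        # Fault recovery: simply switch every LSP from the label for the
        # work path to the label for the protection path.
        for lsp_id in self.active:
            self.active[lsp_id] = "protection"

pe50 = BranchPointPE()
pe50.add_lsp(1, work_label=100, protection_label=200)
print(pe50.outgoing_label(1))   # -> 100
pe50.on_link_fault()
print(pe50.outgoing_label(1))   # -> 200
```

Because the merge-point PE already accepts the protection label, swapping the outgoing label at the branch point alone completes the recovery.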
In such a case, conventionally, when a fault occurs, the PE disposed at the branch point switches all of the LSPs that constitute the link, through the configuration and operation shown in FIGS. 12 to 15.
FIG. 12 is a block diagram illustrating an example of a configuration of the known packet transmission device (PE) shown in FIG. 11. As shown in FIG. 12, the known PE includes a packet transfer processing part 60 and a control part 70 as a basic configuration.
The packet transfer processing part 60 includes an input packet interface (input packet IF) 61, a flow identification part 62, a transfer destination control part 63, an output packet interface (output packet IF) 64, and a table memory 65 that stores a transfer destination information table 65a. 
In the packet transfer processing part 60, the flow identification part 62 identifies the flow number of a packet taken in by the input packet IF 61 from an input path. According to the flow number, the transfer destination control part 63 acquires transfer destination information from the transfer destination information table 65a and determines an output path of the output packet IF 64.
A link/node status monitoring part 71 provided in the control part 70 constantly monitors the status of the links/nodes seen by the output packet IF 64 of the packet transfer processing part 60, and reflects the acquired link/node status in the transfer destination information table 65a provided in the packet transfer processing part 60.
Hereinafter, a detailed description will be made. FIG. 13 illustrates an example of a configuration of the transfer destination information table shown in FIG. 12. As shown in FIG. 13, the transfer destination information table 65a stores flow number #1 to flow number #M as addresses, and stores a transfer destination path for each of them. Flow number #1 to flow number #M are the flow numbers of the packets identified by the flow identification part 62.
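The table 65a can be modeled as a simple array addressed by flow number. A minimal sketch, assuming integer flow numbers and path identifiers; the table size M and the entry for flow #10 (which follows the example later given with reference to FIG. 15) are assumptions:

```python
# Sketch of the transfer destination information table 65a: flow numbers
# #1 to #M serve as addresses, each holding one transfer destination path.
# M and the stored value are illustrative assumptions.

M = 16  # assumed number of flow entries
transfer_destination_table = {flow: 0 for flow in range(1, M + 1)}

# The control part writes a transfer destination path into an entry.
transfer_destination_table[10] = 4000

def look_up_destination(flow_number):
    # The transfer destination control part 63 reads the entry addressed
    # by the flow number identified by the flow identification part 62.
    return transfer_destination_table[flow_number]

print(look_up_destination(10))  # -> 4000
```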
The link/node status monitoring part 71 provided in the control part 70 updates the transfer destination information table 65a provided in the packet transfer processing part 60 by following the procedure shown in FIG. 14. FIG. 14 is a flowchart illustrating an operation of the link/node status monitoring part provided in the control part shown in FIG. 12. In FIG. 14, a method of monitoring link status is described. It is noted that monitoring of node status is performed by a similar procedure and similar processing. In FIG. 14, the steps that show the processing procedure are abbreviated as “ST”. The same applies to each flowchart shown below.
In FIG. 14, in ST 41, the link/node status monitoring part 71 waits for the timing for monitoring the output packet IF 64 to arrive. At a timing for monitoring the output packet IF 64 (ST 41: Yes), the link/node status monitoring part 71 monitors the status of the output packet IF 64 and acquires information on whether the output packet IF 64 is in an active status or an inactive status (ST 42). Then, the link/node status monitoring part 71 accesses the transfer destination information table 65a and, according to the status of the link (active or inactive), updates all the transfer destination paths associated with the link (ST 43). In ST 43, for example, in a case where two paths (LSPs) are established on one link, the two transfer destination paths of those paths are overwritten. Then, the process returns to ST 41, and the link/node status monitoring part 71 waits for the next monitoring timing to arrive.
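One iteration of the procedure of ST 41 to ST 43 can be sketched as follows. The link-to-path association and the path identifiers are assumptions for illustration, matching the example case of two LSPs on one link:

```python
# Sketch of one monitoring pass of the link/node status monitoring part 71.
# The link name, flow numbers, and path identifiers are assumptions.

paths_on_link = {"link_A": [10, 11]}           # link -> flows (LSPs) on it
work_paths = {10: 4000, 11: 4001}              # destinations when active
protection_paths = {10: 5000, 11: 5001}        # destinations when inactive
transfer_destination_table = dict(work_paths)  # table 65a, initially work

def monitor_once(link, link_is_active):
    # ST 42: acquire the active/inactive status of the output packet IF.
    # ST 43: overwrite ALL transfer destination paths associated with
    # that link according to the acquired status.
    source = work_paths if link_is_active else protection_paths
    for flow in paths_on_link[link]:
        transfer_destination_table[flow] = source[flow]

monitor_once("link_A", link_is_active=False)
print(transfer_destination_table)  # -> {10: 5000, 11: 5001}
```

Note that the loop in ST 43 touches every path on the link; this per-path overwrite is the point that becomes costly when a link carries many LSPs.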
FIG. 15 is a view illustrating an operation of the packet transfer processing part shown in FIG. 12. In FIG. 15, for example, the input packet IF 61 takes in a packet of a flow #10 from an input path. Then, the flow identification part 62 identifies the flow of the packet received from the input packet IF 61, determines the flow number “10”, and outputs the determined flow number “10” to the transfer destination control part 63 together with the received packet.
In the above-described transfer destination information table 65a, which is overwritten by the link/node status monitoring part 71 provided in the control part 70, a transfer destination path “4000” is stored at the address of the flow #10. The transfer destination control part 63 then acquires the transfer destination information #4000 from the transfer destination information table 65a based on the flow number #10 received from the flow identification part 62. Then, the transfer destination control part 63 determines an output path of the output packet IF 64 based on the acquired transfer destination information #4000, and instructs the output packet IF 64 to output the packet.
As described above, in the known fault recovery method, if a fault occurs in a link or node, the PE performs the switching processing on all the paths (LSPs) in the link or node. In the known MPLS network, only several paths (LSPs) are established on one link. Accordingly, even if a fault occurs in one link or node, it is enough to change the transfer destination information of those several paths. Therefore, it is possible to strictly keep the restriction that the fault recovery time is to be less than or equal to 50 msec.
However, in an MPLS network (FIG. 16) to be established, if the above-mentioned known fault recovery method is applied, it is not possible to strictly keep the restriction that the fault recovery time is to be less than or equal to 50 msec.
FIG. 16 is a system chart illustrating a configuration of the MPLS network and an operation in a case where the known fault recovery method is applied. As shown in FIG. 16, in the configuration of the MPLS network to be established according to the present invention, similarly to the case in FIG. 11, protection paths from a PE 80 through a PE 83 to a PE 82 are provided with respect to work paths from the PE 80 through a PE 81 to the PE 82. However, the number of paths (LSPs) constituting the links established between PEs is largely different from that of the known configuration. In a PWE3 in the MPLS network of the present invention, each link includes several thousands of paths (LSPs).
In such a case, if fault recovery is performed in the PE 80 to PE 83, which have a configuration similar to that of the known PE 50 to PE 53, the transfer destination information of the several thousands of paths has to be changed when a fault occurs in a link. Accordingly, it is difficult to strictly keep the restriction that the fault recovery time is to be less than or equal to 50 msec.
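The difficulty can be made concrete: with the conventional method, the number of table writes per fault grows linearly with the number of LSPs on the failed link. A hedged illustration follows; the counts of 4 and 4,000 LSPs are assumptions standing in for "several" and "several thousands":

```python
# Illustration of the scaling problem of the known fault recovery method.
# The LSP counts and path identifiers are illustrative assumptions.

def recover_from_link_fault(table, flows_on_link, protection_path_of):
    # Conventional method: overwrite the transfer destination of every
    # flow (LSP) established on the failed link, one entry at a time.
    writes = 0
    for flow in flows_on_link:
        table[flow] = protection_path_of(flow)
        writes += 1
    return writes

# Known MPLS network: only several paths per link.
few = recover_from_link_fault({}, range(4), lambda f: 5000 + f)
# Network of FIG. 16: several thousands of paths per link.
many = recover_from_link_fault({}, range(4000), lambda f: 5000 + f)
print(few, many)  # -> 4 4000; the per-fault work grows with the LSPs
```

With a fixed per-entry write cost, a thousand-fold increase in LSPs per link multiplies the recovery work by the same factor, which is why the 50 msec restriction becomes difficult to keep.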
FIG. 17 is a view illustrating local repair required in the MPLS network. FIG. 18 is a view illustrating global repair required in the MPLS network. In the MPLS network to be established, a PE that is disposed at a branch point between a work path and a protection path is expected to support not only local repair, in which only the path status between neighboring PEs is checked and the paths are switched, as shown in FIG. 17, but also global repair, in which the path status of PEs farther along the path is checked and the paths are switched, as shown in FIG. 18.