As it is generally known, Operations, Administration, and Maintenance (OAM) is a standard term referring to tools used to monitor and troubleshoot a network. OAM for Ethernet networks is being standardized in IEEE 802.1ag under the name “Connectivity Fault Management” (CFM), and in ITU-T SG13 under the name “OAM Functions and Mechanisms for Ethernet based networks”.
Ethernet CFM defines a number of proactive and diagnostic fault localization procedures. CFM operation involves two key elements: Maintenance End Points (MEPs) and Maintenance Intermediate Points (MIPs). MEPs and MIPs are software (or potentially hardware) entities operating within a networking device, such as a data switch, router, or other type of device. MIPs and MEPs can be implemented per networking device, or per communication port within a networking device. CFM requires that MEPs initiate CFM messages and respond to CFM messages. CFM also requires that MIPs receive CFM messages, and respond back to the originating MEP. In the present disclosure, for purposes of explanation, the term “Maintenance Point” will sometimes be used to refer to both MEPs and MIPs.
CFM includes Connectivity Check (CC), Loopback, and Linktrace mechanisms. CFM Loopback and Linktrace messages are used for reactive end-to-end fault management. Proactive connectivity verification is provided by CFM Connectivity Check messages. A Loopback message helps identify a precise fault location along a given Maintenance Association (MA), which is a logical connection between any two MEPs. For example, during a fault isolation operation, a Loopback message may be issued by an MEP to a configured MIP or another MEP. If the MEP or MIP is located in front of the fault, it responds with a Loopback reply. If the MIP is behind the fault, it will not respond. For CFM Loopback to work in such a case, the sending MEP must know the MAC (Media Access Control) address of the destination MEP or MIP. This may be accomplished by examining the MEP database, or by any other means of identifying the MAC address of the remote MEP's or MIP's port.
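For purposes of illustration only, the Loopback-based fault isolation described above may be sketched as follows. The function and parameter names are hypothetical and are not drawn from IEEE 802.1ag; the sketch simply shows how an originating MEP, probing each maintenance point along the MA in order, can place the fault between the last responder and the first non-responder:

```python
# Illustrative sketch of CFM Loopback fault isolation (hypothetical API).
# path_macs: MAC addresses of the MIPs/MEP along the MA, in order from
# the originating MEP. send_loopback: a callable that issues a Loopback
# message to one MAC and returns True if a Loopback reply is received.

def isolate_fault(path_macs, send_loopback):
    last_ok = None
    for mac in path_macs:
        if send_loopback(mac):
            last_ok = mac          # this maintenance point is in front of the fault
        else:
            return (last_ok, mac)  # fault lies between last_ok and mac
    return (last_ok, None)         # every maintenance point replied; no fault found
```

In this sketch, a return value of `("b", "c")` would indicate that the fault lies on the segment between maintenance points `b` and `c`.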
The CFM Linktrace message is used by one MEP to trace the path to another MEP or MIP in the same domain. All intermediate MIPs respond back to the originating MEP with a Linktrace reply. After decreasing a time to live (TTL) count by one, intermediate MIPs also forward the Linktrace message until the destination MIP/MEP is reached. If the destination is an MEP, every MIP along a given MA is required to issue a response to the originating MEP. As a result, the originating MEP can determine the MAC address of all MIPs along the MA, and their precise location with respect to the originating MEP. CFM Linktrace frames include a multicast MAC address as a destination address, and include additional TLV (Type Length Value) encoded data indicating the specific target MEP or MIP MAC address. Linktrace frames use the multicast MAC address to reach the next bridge hop along an MA towards the target MEP or MIP specifically indicated in the TLV encoded data. Only the MIPs that lie between the originating MEP and the target MEP, as specified in the TLV data, must respond. Linktrace frames are required to be terminated and regenerated by each bridge along an MA, and processed hop by hop by bridge software.
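The hop-by-hop Linktrace behavior described above may be sketched as follows. This is an illustrative simulation, not the 802.1ag frame format: each MIP along the MA issues a Linktrace reply to the originating MEP, then decrements the TTL and regenerates the message toward the target, terminating when the target is reached or the TTL is exhausted:

```python
# Illustrative simulation of CFM Linktrace propagation (hypothetical names).
# hops: ordered list of MIP/MEP MAC addresses along the MA, nearest first.
# target_mac: the target MEP/MIP MAC carried in the TLV-encoded data.
# Returns the Linktrace replies collected by the originating MEP as
# (responder MAC, TTL observed at that hop) pairs.

def run_linktrace(hops, target_mac, ttl=64):
    replies = []
    for mac in hops:
        replies.append((mac, ttl))     # each hop replies to the originator
        if mac == target_mac or ttl <= 1:
            break                      # terminate: target reached or TTL spent
        ttl -= 1                       # decrement TTL, regenerate toward target
    return replies
```

From the collected replies, the originating MEP can recover both the MAC address of every MIP along the MA and, from the TTL values, each MIP's position relative to the originator.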
CFM Connectivity Check (CC) messages are periodic “hello” messages transmitted at a predetermined rate by an MEP within a maintenance association. CC messages are not directed towards any specific MEP. Instead, they are multicast on a regular basis. All MIPs and MEPs in the maintenance domain receive the CC message, but do not respond to it. The receiving MEPs and MIPs use the received CC message to build a connectivity database having entries of the format [MEP SA (“Source Address”), Port] for each MEP from which a CC message is received. When an MEP receives a CC message, it updates the database, and knows as a result that the maintenance association (MA) with the sender is functional, including all intermediate MIPs. MEPs are configured to expect a predetermined set of MEP SAs. Accordingly, an MEP can compare received CC messages with the expected set and report related failures.
As an example of CC message operation, when there is a failure, such as a link or a fabric failure, there will be a loss of CC frames detected by one or more MEPs. Loss of CC frames could be due to link failure, fabric failure, or misconfiguration between two MEPs. If an MEP fails to receive expected CC messages, it issues a trap to the Network Management System (NMS).
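The CC monitoring behavior described in the preceding two paragraphs may be sketched as follows. The class and parameter names are hypothetical; the sketch shows an MEP maintaining [MEP SA, Port]-style entries keyed by sender MAC, and reporting any expected peer whose CC frames have stopped arriving for a configured number of CC intervals:

```python
import time

# Illustrative sketch of CC-based connectivity monitoring (hypothetical API).
# expected_macs: the predetermined set of MEP source addresses this MEP
# is configured to expect. interval: the CC transmission period in seconds.
# lost_after: the number of consecutive missed CC frames that is treated
# as a connectivity failure (3 is an assumed, illustrative default).

class CCMonitor:
    def __init__(self, expected_macs, interval, lost_after=3):
        self.expected = set(expected_macs)
        self.timeout = interval * lost_after
        self.db = {}  # sender MAC -> (ingress port, last CC arrival time)

    def on_cc(self, src_mac, port, now=None):
        # Update the connectivity database entry for this sender.
        self.db[src_mac] = (port, now if now is not None else time.time())

    def failed_peers(self, now=None):
        # Expected peers that never sent a CC, or whose CCs have timed out;
        # in practice each would trigger a trap to the NMS.
        now = now if now is not None else time.time()
        return sorted(mac for mac in self.expected
                      if mac not in self.db
                      or now - self.db[mac][1] > self.timeout)
```

Note that this per-peer state is exactly the kind of bookkeeping that, scaled to thousands of MEPs, motivates the resource concerns discussed below.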
While CFM provides many advantages, the cost of providing CFM functionality may be prohibitive in low cost devices. For example, maintaining the connectivity status of potentially thousands of MEPs based on received CC frame processing may be infeasible in systems with limited processor and memory capacity. This limitation is apparent in many specific types of networking devices which may be referred to as Customer Premises Equipment (CPE). As it is generally known, CPE devices are communications equipment residing on the customer's premises. As CFM becomes more widely adopted, the expectation will be that its advantages should extend to CPE devices as well as devices located within the network core or backbone. In particular, it would be advantageous to provide CFM support for the data plane between networking devices in a network core and CPE devices attached to those networking devices.
For the above reasons and others, it would be desirable to have a new system for providing CFM support that effectively extends the advantages of CFM operation out from a network core to CPE devices. The new system should advantageously avoid requiring large amounts of CPU and/or memory capacity in the CPE devices in order to support CFM operation.