A computer network is a geographically distributed collection of interconnected subnetworks, such as local area networks (LANs), that transport data between network nodes. As used herein, a network node is any device adapted to send and/or receive data in the computer network. Thus, in this context, “node” and “device” may be used interchangeably. Data exchanged between network nodes is generally referred to herein as data traffic. The network topology is defined by an arrangement of network nodes that communicate with one another, typically through one or more intermediate nodes, such as routers and switches. In addition to intra-network communications, data also may be exchanged between neighboring (i.e., adjacent) networks. To that end, “edge devices” located at the logical outer-bound of the computer network may be adapted to send and receive inter-network communications. Both inter-network and intra-network communications are typically effected by exchanging discrete packets of data according to predefined protocols. In this context, a protocol consists of a set of rules defining how network nodes interact with each other.
A data packet is generally any packet, frame, or cell that is configured to transport data in a computer network. Each data packet typically comprises “payload” data prepended (“encapsulated”) by at least one network header formatted in accordance with a network communication protocol. The network headers include information that enables network nodes to efficiently route the packet through the computer network. Often, a packet's network headers include a data-link (layer 2) header, an internetwork (layer 3) header and a transport (layer 4) header as defined by the Transmission Control Protocol/Internet Protocol (TCP/IP) Reference Model. The TCP/IP Reference Model is generally described in more detail in Section 1.4.2 of the reference book entitled Computer Networks, Fourth Edition, by Andrew Tanenbaum, published 2003, which is hereby incorporated by reference as though fully set forth herein.
The data-link header provides information for transmitting the packet over a particular physical link (i.e., a communication medium), such as a point-to-point link, Ethernet link, wireless link, optical link, etc. To that end, the data-link header may specify a pair of “source” and “destination” network interfaces that are connected by the physical link. Each network interface contains the mechanical, electrical and signaling circuitry and logic used to couple a network node to one or more physical links. A network interface is often associated with a hardware-specific address, known as a media access control (MAC) address. Accordingly, the source and destination network interfaces in the data-link header are typically represented as source and destination MAC addresses. The data-link header may also store flow control, frame synchronization and error checking information used to manage data transmissions over the physical link.
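The data-link header fields described above can be illustrated with a short sketch that parses a 14-byte Ethernet II header into its source MAC, destination MAC, and EtherType fields. This is a simplified, hypothetical example (it ignores VLAN tags and the frame check sequence); all addresses are illustrative.

```python
import struct

def parse_ethernet_header(frame: bytes) -> dict:
    """Parse a 14-byte Ethernet II data-link header (simplified sketch;
    VLAN tags and the trailing frame check sequence are ignored)."""
    dst_mac, src_mac, ethertype = struct.unpack("!6s6sH", frame[:14])
    fmt = lambda m: ":".join(f"{b:02x}" for b in m)
    return {"dst_mac": fmt(dst_mac), "src_mac": fmt(src_mac), "ethertype": ethertype}

# Illustrative frame: broadcast destination, made-up source, EtherType 0x0800 (IPv4)
frame = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"payload"
hdr = parse_ethernet_header(frame)
```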
The internetwork header provides information defining the packet's logical path through the computer network. Notably, the path may span multiple physical links. The internetwork header may be formatted according to the Internet Protocol (IP), which specifies IP addresses of both a source and destination node at the end points of the logical path. Thus, the packet may “hop” from node to node along its logical path until it reaches the destination node assigned to the destination IP address stored in the packet's internetwork header. After each hop, the source and destination MAC addresses in the packet's data-link header may be updated, as necessary. However, the source and destination IP addresses typically remain unchanged as the packet is transferred from link to link in the network.
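The per-hop behavior described above can be sketched as follows: at each hop the data-link (MAC) addresses are rewritten for the next physical link, while the source and destination IP addresses remain end-to-end constants. All addresses here are illustrative placeholders.

```python
# Sketch of per-hop forwarding: the data-link addresses are rewritten at
# every hop, but the internetwork (IP) addresses never change in transit.
packet = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.9",
          "src_mac": None, "dst_mac": None}

def forward_one_hop(pkt: dict, out_if_mac: str, next_hop_mac: str) -> dict:
    # Rewrite only the link-local (layer-2) addresses for the outgoing link.
    pkt["src_mac"], pkt["dst_mac"] = out_if_mac, next_hop_mac
    return pkt

forward_one_hop(packet, "aa:aa:aa:aa:aa:01", "bb:bb:bb:bb:bb:01")  # hop 1
forward_one_hop(packet, "bb:bb:bb:bb:bb:02", "cc:cc:cc:cc:cc:01")  # hop 2
# After both hops the MAC addresses reflect the last link traversed,
# while src_ip/dst_ip are untouched.
```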
The transport header provides information for ensuring that the packet is reliably transmitted from the source node to the destination node. The transport header typically includes, among other things, source and destination port numbers that respectively identify particular software applications executing in the source and destination nodes. More specifically, the packet is generated in the source node by the application assigned to the source port number. Then, the packet is forwarded to the destination node and directed to the application assigned to the destination port number. The transport header also may include error-checking information (i.e., a checksum) and other data-flow control information. For instance, in connection-oriented transport protocols such as the Transmission Control Protocol (TCP), the transport header may store sequencing information that indicates the packet's relative position in a transmitted stream of data packets. The TCP protocol is generally described in more detail in the Request for Comments (RFC) 793, entitled Transmission Control Protocol, published September 1981, which publication is publicly available through the Internet Engineering Task Force (IETF) and is expressly incorporated herein by reference as though fully set forth herein.
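The transport-header fields discussed above (ports and sequencing information) can likewise be sketched by unpacking the first twelve bytes of a TCP header. This is a minimal, hypothetical example; checksum verification and option parsing are omitted, and the port and sequence values are illustrative.

```python
import struct

def parse_tcp_header(segment: bytes) -> dict:
    """Extract source/destination ports and sequencing fields from a TCP
    transport header (minimal sketch; checksum and options omitted)."""
    src_port, dst_port, seq, ack = struct.unpack("!HHII", segment[:12])
    return {"src_port": src_port, "dst_port": dst_port, "seq": seq, "ack": ack}

# Illustrative segment: ephemeral source port, destination port 80 (HTTP)
seg = struct.pack("!HHII", 49152, 80, 1000, 0) + b"\x00" * 8
hdr = parse_tcp_header(seg)
```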
Multi-Protocol Label Switching/Virtual Private Network Architecture
A virtual private network (VPN) is a collection of network nodes that establish private communications over a shared backbone network. Previously, VPNs were implemented by embedding private leased lines in the shared network. The leased lines (i.e., communication links) were reserved only for network traffic among those network nodes participating in the VPN. Today, the above-described VPN implementation has been mostly replaced by private “virtual circuits” deployed in public networks. Specifically, each virtual circuit defines a logical end-to-end data path between a pair of network nodes participating in the VPN.
A virtual circuit may be established using, for example, conventional layer-2 Frame Relay (FR) or Asynchronous Transfer Mode (ATM) networks. Alternatively, the virtual circuit may “tunnel” data between its logical end points using known layer-2 and/or layer-3 tunneling protocols, such as the Layer-2 Tunneling Protocol (L2TP) or the Generic Routing Encapsulation (GRE) protocol. In this case, one or more tunnel headers are prepended to a data packet to appropriately route the packet along the virtual circuit. The Multi-Protocol Label Switching (MPLS) protocol may be used as a tunneling mechanism for establishing layer-2 virtual circuits or layer-3 network-based VPNs through an IP network.
MPLS enables network nodes to forward packets along predetermined “label switched paths” (LSP). Each LSP defines a logical data path, or virtual circuit, over which a source node can transmit data to a destination node. As used herein, a unidirectional tunnel is a logical data path configured to transmit data traffic in a single direction between network nodes. Thus, a LSP is an example of a unidirectional tunnel in a MPLS-configured network. A data flow is more generally an exchange of data traffic between network nodes, the data traffic having a common set of characteristics, such as the same source and destination IP addresses, source and destination TCP port numbers, and so forth. A data flow may be unidirectional or bidirectional. For instance, a bidirectional data flow may “tunnel” through a MPLS-configured network in the form of opposing unidirectional tunnels established in the network, whereas a unidirectional data flow may require only a single unidirectional tunnel to traverse the network.
It is often necessary for a pair of source and destination nodes to establish more than one LSP between them, i.e., to support multiple unidirectional tunnels. For instance, the source and destination nodes may be configured to transport data for an application that requires opposing unidirectional tunnels—i.e., a first unidirectional tunnel from the source node to the destination node and a second unidirectional tunnel from the destination node to the source node. An example of such an application requiring two-way communications may be a voice telephone call, in which opposing unidirectional tunnels must be established in order to transport voice-over-IP (VoIP) data between the source and destination nodes.
Unlike traditional IP routing, where node-to-node (“next hop”) forwarding decisions are performed based on destination IP addresses, MPLS-configured nodes instead forward data packets based on “label” values (or “tag” values) added to the IP packets. As such, a MPLS-configured node can perform a label-lookup operation to determine a packet's next-hop destination along a LSP. For example, the destination node at the end of the LSP may allocate a VPN label value to identify a data flow's next-hop destination in an adjacent (“neighboring”) routing domain. The destination node may advertise the VPN label value, e.g., in a conventional Multi-Protocol Border Gateway Protocol (MP-BGP) message, to the source node located at the start of the LSP. Thereafter, the source node incorporates the advertised VPN label value into each data packet that it transmits over the LSP to the destination node. The destination node performs VPN-label lookup operations to render inter-domain forwarding determinations for the data packets that it receives.
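The VPN label-lookup operation described above amounts to a table lookup keyed by label value. The following hypothetical sketch models the destination node's table mapping advertised VPN label values to next-hop destinations in neighboring routing domains; the label values and router names are illustrative, not drawn from any real MPLS implementation.

```python
# Hypothetical VPN label-lookup table at the destination PE node: each
# locally-allocated VPN label identifies a next hop in an adjacent domain.
vpn_label_table = {
    100: "CE-router-A",   # label advertised (e.g., via MP-BGP) for one VPN route
    200: "CE-router-B",   # label advertised for another VPN route
}

def forward_by_vpn_label(vpn_label: int) -> str:
    """Render an inter-domain forwarding determination from a VPN label."""
    try:
        return vpn_label_table[vpn_label]
    except KeyError:
        raise ValueError(f"no forwarding entry for VPN label {vpn_label}")

next_hop = forward_by_vpn_label(100)
```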
While the VPN label value may be used by the destination node to identify a next-hop destination at the end of the LSP, next-hop destinations along the LSP may be determined based on locally-allocated interior-gateway protocol (IGP) label values. Specifically, each logical hop along the LSP may be associated with a corresponding IGP label value. For purposes of discussion, assume that the source node communicates data to the destination node in a “downstream” direction, such that every logical hop along the LSP consists of an upstream node that forwards data packets to a neighboring downstream node. Typically, the downstream node allocates an IGP label value and sends the IGP label value to the neighboring upstream node using, e.g., the Label Distribution Protocol or Resource Reservation Protocol. Then, the upstream node incorporates the IGP label value into data packets that it forwards to the downstream node along the LSP. Penultimate hop popping (PHP) is often employed for the LSP's last logical hop, such that an IGP label is not included in data packets sent to the destination node. In this way, the PHP-enabled destination node does not perform IGP label-lookup operations and uses only VPN labels to determine the data packets' next-hop destinations, thereby reducing the number of label-lookup operations that it performs.
A data packet may contain a “stack” of MPLS labels, such as the above-noted IGP and VPN labels. The label stack's top-most label typically determines the packet's next-hop destination. After receiving a data packet, a MPLS-configured node “pops” (removes) the packet's top-most label from the label stack and performs a label-lookup operation to determine the packet's next-hop destination. Then, the node “pushes” (inserts) a new label value associated with the packet's next hop onto the top of the stack and forwards the packet to its next destination. This process is repeated for every logical hop along the LSP until the packet reaches its destination node. The above-described MPLS operation is described in more detail in Chapter 7 of the reference book entitled IP Switching and Routing Essentials, by Stephen Thomas, published 2002, which is hereby incorporated by reference as though fully set forth herein.
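The pop/lookup/push cycle described above can be modeled with a short sketch in which the label stack is a list whose last element is the top-most label, and a swap table maps an incoming top label to the label pushed for the next hop. All label values are illustrative.

```python
# Sketch of one logical MPLS hop: pop the top-most label, look it up, and
# push the label allocated for the next hop. The VPN label at the bottom
# of the stack is carried through unchanged.

def mpls_hop(label_stack: list, swap_table: dict) -> list:
    top = label_stack.pop()          # "pop" the top-most label
    next_label = swap_table[top]     # label-lookup for the next-hop destination
    label_stack.append(next_label)   # "push" the outgoing label onto the stack
    return label_stack

stack = [42, 17]                     # bottom: VPN label 42; top: IGP label 17
stack = mpls_hop(stack, {17: 23})    # one logical hop: IGP label 17 swapped for 23
```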
Layer-3 network-based VPN services that utilize MPLS technology are often deployed by network service providers for one or more customer sites. These networks are typically said to provide “MPLS/VPN” services. As used herein, a customer site is broadly defined as a routing domain containing at least one customer edge (CE) device coupled to a provider edge (PE) device in the service provider's network (“provider network”). The PE and CE devices are generally intermediate network nodes, such as routers or switches, located at the edge of their respective networks. The PE-CE data links may be established over various physical mediums, such as conventional wire links, optical links, wireless links, etc., and may communicate data formatted using various network communication protocols including ATM, Frame Relay, Ethernet, Fibre Distributed Data Interface (FDDI), etc. In addition, the PE and CE devices may be configured to exchange routing information over their respective PE-CE links in accordance with various interior and exterior gateway protocols. Non-edge devices located within the interior of the MPLS/VPN network are generally referred to as provider (P) devices.
In the traditional MPLS/VPN network architecture, each customer site may participate in one or more different VPNs. Most often, each customer site is associated with a single VPN, and hereinafter the illustrative embodiments will assume a one-to-one correspondence between customer sites and VPNs. For example, customer sites owned or managed by a common administrative entity, such as a corporate enterprise, may be statically assigned to the enterprise's VPN. As such, network nodes situated in the enterprise's various customer sites participate in the same VPN and are therefore permitted to securely communicate with one another via the provider network. This widely-deployed MPLS/VPN architecture is generally described in more detail in Chapters 8-9 of the reference book entitled MPLS and VPN Architecture, Volume 1, by I. Pepelnjak et al., published 2001 and in the IETF publication RFC 2547, entitled BGP/MPLS VPNs, by E. Rosen et al., published March 1999, each of which is hereby incorporated by reference as though fully set forth herein.
Deep Packet Inspection Services
As used herein, an application is any type of network software that may be used to effectuate communications in a computer network. For instance, an application may be a network protocol or other network application that executes on a first network node and is configured to communicate with a similar application executing on a second network node. Examples of applications include, among other things, conventional web-browsing software, multimedia and streaming software, peer-to-peer (P2P) software, authentication-authorization-accounting (AAA) software, VoIP software, network-messaging software, file-transfer software, and so on. Those skilled in the art will appreciate that there exists an almost unlimited number of types of applications within the scope of the present disclosure, far too many to list explicitly. Hereinafter, a subscriber is a user of an application. Thus, a subscriber may be an individual or other entity which uses an application to communicate in the computer network.
It is often desirable to monitor network traffic and impose various application-level policies for optimizing and managing resource usage in a computer network. Accordingly, a network administrator may apply application-level policies that implement predefined rules for controlling network traffic patterns, e.g., based on application and/or subscriber-related information. For instance, a network administrator may select a set of rules for managing the allocation of available bandwidth among various applications or subscribers. Yet other rules may be used to filter certain types of traffic that are known to be unsafe or unauthorized. The administrator also may select rules for controlling quality of service, subscriber billing, subscriber usage, and so forth. In general, the network administrator may implement almost any type of rule-based policy to ensure that the network traffic conforms with a desired pattern of behavior.
Deep packet inspection (DPI) services provide a useful means for implementing application-level policies in a computer network. The DPI services may be configured to analyze the contents of one or more application-level packet headers and, in some cases, selected portions of the packet's payload data as well. By analyzing the packet's application-level headers and/or payload data, the DPI services can implement a selected set of application-level policies consistent with the packet's contents. The DPI services may be configured to apply different application-level policies for different types of data flows. Thus, a data packet first may be classified as belonging to a particular data flow, then the DPI services can select an appropriate set of application-level policies to apply to the packet based on the packet's flow classification. The application-level policies may indicate, for example, whether the data packet should be dropped, modified, forwarded, etc. Alternatively, the policies may collect information, such as packet-related statistics, that enable a system administrator to configure (or reconfigure) aspects of the computer network. Those skilled in the art will appreciate that the DPI services may employ various different types of application-level policies without limitation.
In practice, when a data packet is received at a network node, the node may perform a stateful flow-classification procedure that associates the received data packet with a particular data flow. The procedure is “stateful” in the sense that the network node may store state information associated with one or more known data flows, and then may “match” the received data packet with a known data flow by comparing the packet's contents with the stored state information. The conventional flow-classification procedure is usually performed based on the contents of selected fields in the packet's layer-3 and layer-4 (IP and transport layer) headers. After the packet's data flow has been identified, the network node may determine which set of application-level policies to implement based on the packet's flow classification. Thereafter, the DPI services in the network node apply the appropriate application-level policies to the received data packet.
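The stateful flow-classification step described above can be sketched as a dictionary keyed by the packet's flow identifier: a received packet either matches stored state for a known flow or causes new state to be created. The policy names and tuple values here are purely illustrative.

```python
# Minimal sketch of stateful flow classification: state for known data
# flows is stored per 5-tuple, and each arriving packet is matched against
# that stored state before DPI policies are applied.
flow_table = {}

def classify(five_tuple: tuple, policies: str = "default-policy") -> dict:
    """Return stored state for a known flow, creating state for a new one."""
    if five_tuple not in flow_table:
        flow_table[five_tuple] = {"policies": policies, "packets": 0}
    state = flow_table[five_tuple]
    state["packets"] += 1            # update per-flow state on every packet
    return state

t = ("TCP", "10.0.0.1", 49152, "10.0.0.2", 80)   # illustrative 5-tuple
classify(t, "web-policy")   # first packet: new flow, state created
classify(t)                 # second packet: matched against stored state
```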
Most typically, a conventional 5-tuple is used to classify an IP packet's data flow. By way of example, consider the data packet 100 illustrated in FIG. 1. The packet includes a data-link header 110, IP header 120, transport header 130, application header(s) 140, payload data 150 and a cyclic redundancy check (CRC) 160. The data-link, IP and transport headers are conventional packet headers known in the art. The headers 110-130 may encapsulate one or more application headers 140, which store application-specific information related to the payload data 150. For instance, the application headers 140 may contain information that is useful for a particular application to process the payload data 150. The CRC 160 is a data-integrity check value that may be used to verify that the contents of the data packet 100 were not altered in transit.
The conventional 5-tuple 170 may be extracted from the data packet 100, as shown. Specifically, the 5-tuple includes a protocol field 122, source-IP-address field 124, destination-IP-address field 126, source-port field 132 and destination-port field 134. The protocol field 122 contains an identifier corresponding to the data format and/or the transmission format of the payload data 150. The source-IP-address field 124 contains a source IP address that identifies the source node transmitting the data packet. Similarly, the destination-IP-address field 126 contains a destination address identifying the packet's intended destination node. The source-port and destination-port fields 132 and 134 store values, such as standard TCP port numbers, that respectively identify software applications executing in the source and destination nodes. While the fields 122-126 are extracted from the IP header 120, the fields 132-134 are typically extracted from the transport header 130.
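The extraction of the fields 122-134 can be sketched by unpacking a raw IPv4 packet: the protocol and IP addresses come from fixed offsets in the IP header, and the port numbers follow at the offset given by the IP header length (IHL). This is a simplified, hypothetical example assuming a well-formed IPv4 packet carrying TCP or UDP.

```python
import struct

def extract_five_tuple(ip_packet: bytes) -> tuple:
    """Extract the conventional 5-tuple from an IPv4 packet carrying TCP/UDP.
    Simplified sketch: the transport header is assumed to start immediately
    after the IP header, whose length is given by the IHL field."""
    ihl = (ip_packet[0] & 0x0F) * 4                       # IP header length, bytes
    protocol = ip_packet[9]                               # protocol field
    src_ip = ".".join(str(b) for b in ip_packet[12:16])   # source IP address
    dst_ip = ".".join(str(b) for b in ip_packet[16:20])   # destination IP address
    src_port, dst_port = struct.unpack("!HH", ip_packet[ihl:ihl + 4])
    return (protocol, src_ip, src_port, dst_ip, dst_port)

# Illustrative packet: 20-byte IPv4 header (protocol 6 = TCP), then TCP ports
pkt = bytes([0x45, 0, 0, 40, 0, 0, 0, 0, 64, 6, 0, 0,
             10, 0, 0, 1, 10, 0, 0, 2]) + struct.pack("!HH", 49152, 80) + b"\x00" * 16
five_tuple = extract_five_tuple(pkt)
```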
In most IP-based networks, the conventional 5-tuple 170 may be used to uniquely associate the data packet 100 with its particular application. This is generally because each IP data flow generated by the application contains the same protocol identifier 122, source and destination IP addresses 124 and 126 and source and destination port numbers 132 and 134. For instance, suppose that the application establishes opposing unidirectional IP data flows between a pair of network nodes N1 and N2. Further, assume that the node N1 executes the application using the protocol identifier “A”, an IP address “B” and port number “C,” and the node N2 executes the same application using the protocol identifier “A,” an IP address “D” and a port number “E.” In this case, a unidirectional IP data flow established by the application from N1 to N2 is associated with the 5-tuple {A, B, C, D, E}. Likewise, a second IP data flow established by the application from N2 to N1 is associated with the same five values, albeit with the source and destination values transposed, i.e., {A, D, E, B, C}. Notably, the order of the individual values in the 5-tuple need not be identical for the two IP data flows to “match” the same application.
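The order-insensitive matching just described can be sketched by sorting the two (address, port) endpoint pairs into a canonical key, so that both directions of an application's traffic map to the same flow identifier. The canonicalization convention and the placeholder values A-E are illustrative, mirroring the example above.

```python
# Sketch of order-insensitive 5-tuple matching: sorting the endpoint pairs
# yields one canonical key for both unidirectional directions of a flow.
def canonical_flow_key(protocol, src_ip, src_port, dst_ip, dst_port) -> tuple:
    endpoints = sorted([(src_ip, src_port), (dst_ip, dst_port)])
    return (protocol, endpoints[0], endpoints[1])

k1 = canonical_flow_key("A", "B", "C", "D", "E")   # direction N1 -> N2
k2 = canonical_flow_key("A", "D", "E", "B", "C")   # direction N2 -> N1
# k1 == k2: both directions resolve to the same application-level flow key
```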
Because data packets containing the same 5-tuple 170 usually can be reliably associated with the same application, conventional 5-tuple flow-classification procedures can be used by DPI services for selecting which application-level policies to apply to received data packets. That is, a data packet first may be classified as belonging to a particular data flow based on the 5-tuple contained in the packet. Then, the DPI services can select an appropriate set of application-level policies to apply to the packet based on the packet's 5-tuple flow classification.
Despite the above-noted advantages, the conventional 5-tuple flow-classification technique is generally ineffective in MPLS/VPN networks. Problems arise because multiple data flows may utilize the same set of 5-tuple values through the MPLS/VPN network even though the data flows transport data for different applications. More specifically, the conventional 5-tuple is not necessarily unique among applications in the MPLS/VPN network because it is possible for different VPNs to allocate overlapping IP address ranges, which in turn may result in the same source and destination IP addresses being allocated for use by different applications, i.e., by applications executing in different VPNs. As a result, the conventional 5-tuple flow-classification procedure may inadvertently misclassify some data packets in different VPNs as belonging to the same data flow. This misclassification, in turn, may result in the DPI services applying the wrong set of application-level policies to the misclassified data packets. For this reason, DPI services typically cannot be reliably deployed in MPLS/VPN networks.
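The misclassification just described can be demonstrated with a short sketch: two different VPNs allocate overlapping address ranges, so packets belonging to unrelated applications yield identical 5-tuples and collide in a flow table keyed only on the 5-tuple. All addresses, ports, and policy names are hypothetical.

```python
# Illustration of 5-tuple ambiguity under overlapping VPN address ranges:
# two unrelated applications in different VPNs produce the identical
# 5-tuple, so a 5-tuple-keyed flow table cannot tell them apart.
pkt_vpn_blue = ("TCP", "10.1.1.1", 5060, "10.1.1.2", 5060)  # e.g., VoIP in VPN "blue"
pkt_vpn_red  = ("TCP", "10.1.1.1", 5060, "10.1.1.2", 5060)  # unrelated app in VPN "red"

flow_table = {}
flow_table[pkt_vpn_blue] = "voip-policy"
flow_table[pkt_vpn_red] = "p2p-policy"   # silently overwrites the VoIP entry!
# The table now holds a single entry: both VPNs' flows collapsed into one,
# and the wrong application-level policy would be applied to one of them.
```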
Without the benefit of conventional 5-tuple flow-classification, it is very difficult for DPI services to determine the application-level relationships of data flows that establish unidirectional tunnels, or LSPs, in the MPLS/VPN network. First of all, each unidirectional tunnel is typically associated with a different set of MPLS label values, which are locally-allocated by network nodes situated along the tunnel's LSP. Thus, multiple unidirectional tunnels may transport data for the same application, although each of the tunnels utilizes a different set of IGP and VPN label values. Current MPLS/VPN deployments do not include mechanisms for associating (or “binding”) the different sets of locally-allocated MPLS label values with the same VPN or application. Because MPLS label values cannot be easily associated with applications, DPI services presently cannot analyze MPLS label values transported in a unidirectional tunnel in order to determine which application-level policies to apply to the data traffic transported through that tunnel. In short, DPI services currently cannot determine the application-level relationships among unidirectional tunnels in a MPLS/VPN network.
Because of the foregoing difficulties, DPI services are not presently employed in MPLS/VPN networks or in other similar networks in which conventional 5-tuple flow-classification cannot reliably identify application-level relationships among a plurality of unidirectional tunnels. It is therefore generally desirable to provide a technique for deploying DPI services in a MPLS/VPN-configured computer network. The technique should enable the DPI services to apply a set of application-level policies to multiple unidirectional tunnels associated with the same application.