The use of packet-based networking has been growing over time, and the growth in traffic demands is increasingly being met by introducing ever larger monolithic routers. However, this model is approaching its technological and economic limits. It is more and more difficult to fulfill the increasing demand for bandwidth with traditional router designs, and with the emergence of low-cost Commercial Off-The-Shelf (COTS) hardware, router vendors also have difficulty justifying higher costs for the same performance. At the same time, the demands on the routing and switching control plane in the access and aggregation networks are becoming more complex. Operators want the ability to customize packet delivery to handle specific kinds of traffic flows without the detailed low-level configuration typical of today's networks.
These trends suggest a different approach to the network architecture, in which the control plane logic is handled by a centralized server and the forwarding plane consists of simplified switching elements “programmed” by the centralized controller. Software Defined Networking (SDN) is a new network architecture that introduces programmability, centralized intelligence, and abstraction of the underlying network infrastructure.
OpenFlow is an open standard protocol between the control and forwarding planes used in SDN applications. As shown in FIG. 1, in this model a control platform 100, running on one or more servers 102, 104 in the network, manages a set of switches 108a-108e having only basic forwarding capabilities through a network operating system (OS) 106. The control platform 100 collects information from the switches 108a-108e and from operator configuration, and then computes and distributes the forwarding rules to the switches. A logically centralized controller can more easily coordinate the state among the various switching platforms and provides a flexible programmatic interface to build various new protocols and management applications. This separation significantly simplifies modifications to the network control logic (as it is centralized), enables the data and control planes to evolve and scale independently, and potentially decreases the cost of the forwarding plane elements.
OpenFlow was initially designed for Ethernet-based forwarding engines, with internal flow-tables and a standardized interface to add and/or remove flow entries. An example OpenFlow switch 110 is illustrated in FIG. 2 as consisting of three major components: the flow tables 112a-112x, a secure channel to the control process 114, and the OpenFlow protocol 116.
The flow tables 112a-112x specify how the switch 110 should process packets, with a set of actions associated with each flow entry. Packets can be pipelined through the flow tables 112a-112x in a specified order. The Secure Channel 114 connects the switch to a remote control process, e.g. a controller 118, for communications and packet forwarding between the controller 118 and the switch 110. The OpenFlow protocol 116 provides an open and standard method for an OpenFlow switch 110 to communicate with a controller 118. A Group Table 120 is illustrated as a special type of table to be used for more complex types of forwarding (broadcast, multicast, failover, hashing, etc.). Packets can first pass through the flow tables 112a-112x, and an entry may specify an action to direct the packet to a specific entry in the group table 120.
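The pipeline behavior described above can be illustrated with a minimal sketch. This is a toy model, not the OpenFlow wire protocol: the match-field names, table layout, and action keywords (`goto_table`, `group`, `apply`) are simplified illustrations of how a packet traverses the flow tables 112a-112x and may be handed off to an entry in the group table 120.

```python
# Toy model of an OpenFlow-style flow-table pipeline with a group table.
# Match fields and action names are illustrative, not the real spec.

def lookup(table, packet):
    """Return the actions of the first entry whose match fields all
    equal the packet's header fields, or None on a table miss."""
    for entry in table:
        if all(packet.get(k) == v for k, v in entry["match"].items()):
            return entry["actions"]
    return None

def process(packet, flow_tables, group_table):
    """Pipeline the packet through the flow tables in order; an entry
    may direct it to a group, whose buckets hold the final actions."""
    table_id = 0
    while table_id < len(flow_tables):
        actions = lookup(flow_tables[table_id], packet)
        if actions is None:
            return ["drop"]                       # table miss
        if "goto_table" in actions:
            table_id = actions["goto_table"]      # continue the pipeline
        elif "group" in actions:
            return group_table[actions["group"]]  # e.g. multicast buckets
        else:
            return actions["apply"]               # terminal action list
    return ["drop"]

flow_tables = [
    [{"match": {"eth_type": 0x0800}, "actions": {"goto_table": 1}}],
    [{"match": {"ip_dst": "10.0.0.1"}, "actions": {"group": 1}},
     {"match": {}, "actions": {"apply": ["output:1"]}}],
]
group_table = {1: ["output:2", "output:3"]}  # replicate to two ports

print(process({"eth_type": 0x0800, "ip_dst": "10.0.0.1"},
              flow_tables, group_table))  # ['output:2', 'output:3']
```

Note how the group table handles the "more complex types of forwarding" mentioned above: a single flow entry points at group 1, whose buckets replicate the packet to two output ports, as in a broadcast or multicast case.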
The controller 118 can communicate with the switching elements, including switch 110, using the OpenFlow protocol 116. The controller 118 hosts simplified network applications which can compute the rules and push the corresponding forwarding instructions to the switching elements. This architecture allows the controller 118 to run on a separate server and control multiple switches, rather than having a distributed control plane with components that run on each switch (e.g. Spanning Tree Protocol (STP), Open Shortest Path First (OSPF), etc.).
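The centralized arrangement described above, in which one controller programs multiple switches rather than each switch running its own distributed protocol, can be sketched as follows. The class and method names (`Switch`, `Controller`, `push_route`) are illustrative assumptions, not the API of any real controller platform.

```python
# Sketch of a logically centralized controller that computes a route
# once and installs the corresponding rule on every switch along the
# path. The switches hold no control logic of their own.

class Switch:
    """A simplified forwarding element with only a flow table."""
    def __init__(self, dpid):
        self.dpid = dpid
        self.flow_table = []

    def install(self, rule):
        self.flow_table.append(rule)

class Controller:
    """Holds the global network view and programs managed switches."""
    def __init__(self):
        self.switches = {}

    def connect(self, switch):
        self.switches[switch.dpid] = switch

    def push_route(self, path, match):
        """path: list of (dpid, out_port) hops computed centrally."""
        for dpid, out_port in path:
            self.switches[dpid].install(
                {"match": match, "actions": ["output:%d" % out_port]})

ctl = Controller()
s1, s2 = Switch("s1"), Switch("s2")
ctl.connect(s1)
ctl.connect(s2)
ctl.push_route([("s1", 2), ("s2", 1)], {"ip_dst": "10.0.0.1"})
print(len(s1.flow_table), len(s2.flow_table))  # 1 1
```

The key design point is that the route computation happens once, in one place; the switches merely receive and apply the resulting rules, which is what allows the data and control planes to scale independently.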
Operators use different middlebox services, called inline services, such as deep packet inspection (DPI), logging/metering/charging/advanced charging, firewall, Intrusion Detection and Prevention (IDP), Network Address Translation (NAT), and others to manage subscriber traffic. An inline service that inspects a packet and potentially alters it, by encapsulating it, marking it, or blocking it, is also called a “high-touch” service or function (e.g. DPI). These services have high requirements on throughput and packet inspection capabilities. Inline services can be transparent or non-transparent (e.g. content filtering) to the end users.
Inline services can be hosted on dedicated physical hardware, or in virtual machines (VMs). Lightweight services such as NAT can potentially be incorporated into the switching domain in order to minimize the number of hops the traffic is subjected to.
Service chaining is required if the traffic needs to go through more than one inline service. If more than one chain of services is possible, then the operator needs to configure the networking infrastructure to direct the right traffic through the right inline service path. In this description, “traffic steering” refers to leading the traffic through the right inline service path. An inline service is said to be transparent when it is not explicitly addressed by the user. Therefore, the end user might not be aware that its traffic may traverse a series of network services. It is assumed that a transparent service will not modify the L2/L3 packet headers.
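The traffic steering described above can be sketched as a simple policy lookup: a classifier assigns each flow to a traffic class, and the policy maps each class to an ordered inline-service path. The traffic-class names, the port-based classifier, and the policy format are assumptions for illustration only; a real deployment would classify on richer criteria (subscriber identity, application, etc.).

```python
# Illustrative sketch of traffic steering: map a traffic class to the
# ordered chain of inline services its packets must traverse before
# normal forwarding resumes.

STEERING_POLICY = {
    # traffic class  -> ordered chain of inline services
    "subscriber_web":  ["dpi", "firewall", "nat"],
    "subscriber_voip": ["firewall"],
    "default":         [],  # bypass all inline services
}

def classify(packet):
    """Toy classifier: pick a traffic class from the TCP port."""
    port = packet.get("tcp_dst")
    if port == 80:
        return "subscriber_web"
    if port == 5060:
        return "subscriber_voip"
    return "default"

def service_path(packet):
    """Return the ordered inline services this packet must traverse.
    Because transparent services leave the L2/L3 headers unmodified,
    the ordering of the chain is the only constraint to enforce."""
    return STEERING_POLICY[classify(packet)]

print(service_path({"tcp_dst": 80}))  # ['dpi', 'firewall', 'nat']
```

In an SDN setting, a controller would translate each such service path into flow entries on the switches between consecutive services, which is precisely the configuration burden that motivates the open, programmable approach discussed next.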
Conventional solutions provide service chaining of a single instance of a physical hardware box for a given inline service. Current solutions use private mechanisms or error-prone manual configuration of the network (for example, virtual local area network (VLAN) configuration, policy-based routing (PBR), etc.), thereby forcing vendor lock-in on the operator. The use of vendor-specific mechanisms also means that the inline services need to be supplied by the vendor of the chaining infrastructure. As a result, the feature velocity that operators often desire can be greatly reduced. Operators are looking for an infrastructure where they can mix and match different components (both software and hardware) from numerous vendors in their network. Using open interfaces also has the benefit of lowering operating expenses by easing the configuration of the network and its nodes.
Therefore, it would be desirable to provide a system and method that obviate or mitigate the above described problems.