The Internet transformation being witnessed today is led by growing connectivity needs, mobile services, cloud computing, and big data. These have significantly increased the volume and requirements of traffic processed by network equipment. To meet these increasing demands while maintaining or improving profitability, network operators are constantly seeking ways to reduce costs and enable faster innovation. To this end, network function virtualization (NFV) is a new paradigm embraced by service providers. In NFV, common network functions are realized in software on commercial-off-the-shelf (COTS) hardware (e.g., general purpose server and storage hardware, or other "generic" hardware) to provide Network Functions (NFs) through software virtualization techniques. These network functions are referred to as Virtualized Network Functions (vNFs). The use of vNFs aids scalability and largely decouples functionality from physical location, which allows the vNFs to be flexibly placed at different locations (e.g., at customers' premises, at network exchange points, in central offices, or in data centers). This flexibility further enables time-of-day reuse, easy support for testing both production and developmental versions of software, enhanced resilience through virtualization, eased resource sharing, reduced power usage, and the ability to implement various vNFs in networks including heterogeneous server hardware.
Similar to virtual machines, the concept of vNFs allows multiple instances of network functions to be run on shared physical infrastructure, desirably in a data center environment. NFV encompasses both middlebox functionalities (e.g., firewall, deep packet inspection (DPI), network address translation (NAT)) as well as core network functions that are more challenging to virtualize (e.g., session border controllers (SBCs), provider edge (PE) routers, broadband remote access servers (BRAS), serving/gateway GPRS support nodes (SGSN/GGSN)). The core network functions handle large-volume, coarse-granularity traffic flows and do not have the same fine-grained traffic steering requirements as middleboxes do. As these core vNFs are instantiated, traffic will need to be routed across various intermediate vNFs before reaching its destination (e.g., at a content delivery server or back to the PE and/or customer edge (CE) router). This process is called traffic steering for NFV, which requires a flexible network configuration to steer traffic through a set of vNFs.
Compared to the traffic steered across traditional middleboxes, the traffic between core vNFs is usually more aggregated, occurs at much higher rates (e.g., tens of gigabits per second (Gbps)) with a fast-growing trend, and usually does not have the same fine-grained traffic steering requirements that traditional middleboxes do. Thus, existing Layer 2/Layer 3 packet-based traffic steering has scalability issues in trying to handle the large traffic volume required when steering traffic between core vNFs, and further may incur substantial energy consumption.
For example, FIG. 1 illustrates a prior art packet-based switching configuration for steering packets through vNFs. This illustration demonstrates the scalability issue for a pure packet-based approach for traffic steering between core vNFs, where data center network infrastructure has to grow with traffic volume and vNF demands. In FIG. 1, a core switch network element 106 in a packet steering domain 102 is tasked with steering two traffic flows (112, 114) through several vNFs 110A-110D.
The first traffic flow 112 enters the packet steering domain 102 at a rate of 5 Gbps, and must be passed to vNF ‘A’ 110A executing on a computing device of a set of one or more computing devices 108A in a rack and connected through a Top-of-Rack (ToR) switch 104A, and then must be passed to vNF ‘B’ 110B, which also executes on a computing device of a set of computing devices 108B in another rack and is connected through another ToR switch 104B. Similarly, the second traffic flow 114 enters the packet steering domain 102 at a rate of 5 Gbps, and must be passed to vNF ‘C’ 110C executing on a computing device of a set of computing devices 108C in a rack and connected through a ToR switch 104C, and then must be passed to vNF ‘D’ 110D, which also executes on a computing device of a set of computing devices 108D in another rack and is connected through another ToR switch 104D.
Accordingly, the path for the first traffic flow 112 is to enter the packet steering domain 102 and arrive at the switch network element 106 (at circle ‘1’), go to the ToR switch 104A, go to the vNF ‘A’ 110A, go back to ToR switch 104A, and then re-enter the switch network element 106 (at circle ‘2’). Next, this traffic is sent to ToR switch 104B, directed to the vNF ‘B’ 110B, returned back to the ToR switch 104B, and returned to the switch network element 106 (at circle ‘3’), before exiting the packet steering domain 102. The path for the second traffic flow 114 is similar, as it enters the packet steering domain 102 and arrives at the switch network element 106 (at circle ‘1’), is sent to the ToR switch 104C on its way to the vNF ‘C’ 110C, and thus returns to ToR switch 104C and then re-enters the switch network element 106 (at circle ‘2’). Next, this traffic of the second traffic flow 114 is sent to ToR switch 104D, directed to the vNF ‘D’ 110D, returned back to the ToR switch 104D, and then returned back to the switch network element 106 (at circle ‘3’), before exiting the packet steering domain 102.
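The hop sequences described above can be sketched programmatically. The following is a minimal illustrative model (the function and label names are hypothetical, not from the source): each flow arrives at the core switch, detours out to one ToR switch and its vNF per chain element, and returns to the core switch after each detour.

```python
# Hypothetical sketch of the FIG. 1 steering paths. Each chain element is a
# (vNF, ToR switch) pair; the flow detours from the core switch out to the
# rack and back for every vNF it must traverse.

def core_switch_path(chain):
    """Enumerate hops for a flow steered through a chain of (vNF, ToR) pairs."""
    hops = ["core"]                      # circle '1': flow arrives at core switch
    for vnf, tor in chain:
        hops += [tor, vnf, tor, "core"]  # out to the rack, through the vNF, and back
    return hops

# First traffic flow 112: vNF 'A' behind ToR 104A, then vNF 'B' behind ToR 104B.
flow_112 = core_switch_path([("vNF-A", "ToR-104A"), ("vNF-B", "ToR-104B")])
# Second traffic flow 114: vNF 'C' behind ToR 104C, then vNF 'D' behind ToR 104D.
flow_114 = core_switch_path([("vNF-C", "ToR-104C"), ("vNF-D", "ToR-104D")])

# Each flow visits the core switch once more than the number of vNFs in its
# chain, matching circles '1' through '3' in FIG. 1.
print(flow_112.count("core"))  # → 3
print(flow_114.count("core"))  # → 3
```

This makes explicit the pattern that drives the amplification effect discussed next: a chain of N vNFs causes N + 1 core-switch transits per flow.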
In this example, each of these two 5 Gbps flows (112, 114) transits the switch network element 106 three times, which means that the switch network element 106 must process a packet throughput of 30 Gbps (=5 Gbps per flow*3 occurrences per flow*2 flows). This presents an amplification effect, as the addition of new flows will create an amplified load being placed on the packet steering domain 102.
Further, as the size (i.e., rate) of each flow grows and the number of vNFs each flow must transit grows (a natural occurrence over time as traffic increases and the number of to-be-applied NFs increases), this amplification effect is further magnified. For example, assume the packet steering domain 102 still processes two flows, but that each flow is instead 10 Gbps and each flow must transit three vNFs, each instantiated on server hardware in a different rack. In this case, each flow transits the packet steering domain 102 four times, and now the packet throughput at the packet steering domain 102 is 80 Gbps. For core switches used in data centers, high throughput typically requires more hardware density and power consumption, and thus, there is a strong need for efficient and scalable systems to direct high-volume traffic flows across vNFs that may dynamically change in terms of location and the required sequences of vNF traversal.