Access to computer networks has become a ubiquitous part of today's computer usage. Whether connecting to a Local Area Network (LAN) in an enterprise environment to reach shared network resources, or accessing the Internet via the LAN or another access point, users seem always to be logged on to at least one service that is accessed via a computer network. Moreover, the rapid expansion of cloud-based services has led to even further usage of computer networks, and these services are forecast to become ever more prevalent.
Networking is facilitated by various types of equipment, including routers, switches, bridges, gateways, and access points. Large network infrastructures typically employ telecommunication-class network elements, including switches and routers made by companies such as Cisco Systems, Juniper Networks, Alcatel-Lucent, IBM, and Hewlett-Packard. Such telecom switches are highly sophisticated, operating at very high bandwidths and providing advanced routing functionality as well as supporting different Quality of Service (QoS) levels. Private networks, such as LANs, are most commonly used by businesses and home users. It is also common for many business networks to employ hardware- and/or software-based firewalls and the like.
To facilitate communications between networks and the computing devices that access them, networks typically include one or more network devices (e.g., a network switch, a network router, etc.) to route communications (i.e., network packets) from one computing device to another based on network flows, which are stored in a flow lookup table. Traditionally, network packet processing (e.g., packet switching) has been performed on dedicated network processors of the network devices.
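The flow-based routing described above can be sketched as a simple table mapping a flow key to a forwarding action. The following is a minimal illustration, assuming a conventional 5-tuple flow key and an "output port" action; the field names and class are hypothetical, not any particular device's schema.

```python
# Minimal sketch of a flow lookup table keyed on a 5-tuple.
# FlowKey fields and the output-port action are illustrative assumptions.
from collections import namedtuple

FlowKey = namedtuple("FlowKey", "src_ip dst_ip src_port dst_port proto")

class FlowTable:
    def __init__(self):
        self._table = {}  # flow key -> forwarding action

    def insert(self, key, out_port):
        self._table[key] = out_port

    def lookup(self, key):
        # Return the forwarding action for the flow, or None on a miss.
        return self._table.get(key)

table = FlowTable()
key = FlowKey("10.0.0.1", "10.0.0.2", 12345, 80, "tcp")
table.insert(key, out_port=3)
print(table.lookup(key))  # -> 3
```

In practice the dictionary above would be replaced by a hash table engineered for collisions and concurrency, which is the subject of the following paragraphs.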
In recent years, virtualization of computer systems has seen rapid growth, particularly in server deployments and data centers. Under a conventional approach, a server runs a single instance of an operating system directly on physical hardware resources, such as the CPU, RAM, storage devices (e.g., hard disk), network controllers, I/O ports, etc. Under one virtualized approach using Virtual Machines (VMs), the physical hardware resources are employed to support corresponding instances of virtual resources, such that multiple VMs may run on the server's physical hardware resources, wherein each virtual machine includes its own CPU allocation, memory allocation, storage devices, network controllers, I/O ports, etc. Multiple instances of the same or different operating systems then run on the multiple VMs. Moreover, through use of a virtual machine manager (VMM) or “hypervisor,” the virtual resources can be dynamically allocated while the server is running, enabling VM instances to be added, shut down, or repurposed without requiring the server to be shut down. This provides greater flexibility for server utilization, and better use of server processing resources, especially for multi-core processors and/or multi-processor servers.
Deployment of Software Defined Networking (SDN) and Network Function Virtualization (NFV) has also seen rapid growth in the past few years. Under SDN, the system that makes decisions about where traffic is sent (the control plane) is decoupled from the underlying system that forwards traffic to the selected destination (the data plane). SDN concepts may be employed to facilitate network virtualization, enabling service providers to manage various aspects of their network services via software applications and APIs (Application Program Interfaces). Under NFV, by virtualizing network functions as software applications, network service providers can gain flexibility in network configuration, enabling significant benefits including optimization of available bandwidth, cost savings, and faster time to market for new services. Moreover, SDN's support of software-based network packet processing has resulted in network infrastructures that support network packet processing being performed on network devices with general purpose processors, thereby increasing scalability, configurability, and flexibility.
Typically, a network packet flow identification library uses a hash table (i.e., the flow lookup table) on which to perform network flow lookups. However, during the hashing process, hash collisions may occur. Different techniques have been developed to address hash collisions, including multi-level hashing and bucketized hash tables with chaining. One such technique, cuckoo hashing, has emerged as a memory-efficient, high-performance hashing scheme for resolving hash collisions during flow lookup table lookups. It is employed by the data plane libraries and network interface controller drivers of network packet input/output (I/O) engines (e.g., the INTEL® Data Plane Development Kit (DPDK)) for fast network packet processing (e.g., flow lookup table lookups, software router/switch functionality, etc.).
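The core idea of cuckoo hashing is that each key has exactly two candidate buckets, given by two hash functions; an insert into a full bucket evicts ("kicks") the resident key to its alternate bucket, possibly cascading. The sketch below illustrates the scheme under simplifying assumptions: single-entry buckets, toy hash functions derived from Python's built-in `hash`, and a fixed kick limit in place of the rehash/resize a real table would perform.

```python
# Toy cuckoo hash table: two hash functions give each key two candidate
# slots; inserting into a full slot displaces the resident to its
# alternate slot. Sizes and hash choices are illustrative assumptions.

class CuckooHash:
    def __init__(self, size=16, max_kicks=32):
        self.size = size
        self.max_kicks = max_kicks
        self.slots = [None] * size  # each slot holds (key, value) or None

    def _h1(self, key):
        return hash(key) % self.size

    def _h2(self, key):
        return (hash(key) // self.size) % self.size

    def lookup(self, key):
        # At most two probes: the key can only live in one of two slots.
        for i in (self._h1(key), self._h2(key)):
            if self.slots[i] is not None and self.slots[i][0] == key:
                return self.slots[i][1]
        return None

    def insert(self, key, value):
        entry = (key, value)
        i = self._h1(key)
        for _ in range(self.max_kicks):
            if self.slots[i] is None:
                self.slots[i] = entry
                return True
            # Evict the resident entry and move it to its alternate slot.
            self.slots[i], entry = entry, self.slots[i]
            k = entry[0]
            i = self._h2(k) if i == self._h1(k) else self._h1(k)
        return False  # kick limit hit; a real table would rehash/resize
```

The bounded, constant-time lookup (at most two bucket probes) is what makes the scheme attractive for per-packet flow lookups.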
In today's high-performance SDN environments, it is necessary to support read-write concurrency of flow tables. In other words, when one core of the general purpose processor is updating the flow lookup table, another core should be able to perform a flow lookup in parallel, without needing to lock the flow lookup table. While techniques for supporting single-writer, multiple-reader concurrency of cuckoo-hash tables have recently been introduced, they do not support concurrent write access. This results in reduced performance in high-workload environments.
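One common mechanism behind such single-writer, multiple-reader schemes is a version counter (a "seqlock"-style protocol): the writer makes the counter odd before mutating and even after, and a reader retries if it observes an odd or changed counter. The sketch below is illustrative only; the class and names are hypothetical, and a real implementation would use atomic operations and memory barriers rather than plain attributes. Note that the `write` path assumes a single writer, which is exactly the limitation described above.

```python
# Sketch of a version-counter ("seqlock"-style) protocol for
# single-writer, multiple-reader table access. Names are illustrative;
# real code would use atomics/barriers, not plain Python attributes.

class SeqlockTable:
    def __init__(self):
        self.version = 0  # even = stable, odd = write in progress
        self.table = {}

    def write(self, key, value):
        # Assumes a single writer: two concurrent writers would break
        # the odd/even protocol (the limitation noted above).
        self.version += 1          # now odd: readers will retry
        self.table[key] = value
        self.version += 1          # even again: table is stable

    def read(self, key):
        # Lock-free read: retry until a stable snapshot is observed.
        while True:
            v1 = self.version
            if v1 % 2 == 1:
                continue           # write in progress; retry
            value = self.table.get(key)
            if self.version == v1:
                return value       # no writer intervened; value is consistent
```

Because only one writer may hold the version counter at a time, write-heavy workloads serialize on the single writer, which motivates schemes that also permit concurrent writes.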