In a typical cloud data center environment, there is a large collection of interconnected servers that provide computing and/or storage capacity to run various applications. For example, a data center may comprise a facility that hosts applications and services for subscribers, i.e., customers of the data center. The data center may, for example, host all of the infrastructure equipment, such as networking and storage systems, redundant power supplies, and environmental controls. In a typical data center, clusters of storage systems and application servers are interconnected via a high-speed switch fabric provided by one or more tiers of physical network switches and routers. More sophisticated data centers provide infrastructure spread throughout the world, with subscriber support equipment located in various physical hosting facilities. Data centers tend to use switching components that switch packets conforming to packet-based communication protocols such as Transmission Control Protocol over Internet Protocol (TCP/IP).
Network devices that use TCP/IP to exchange packets may use large send offload (LSO) to offload, from the central processing unit (CPU) that otherwise executes the TCP/IP stack to a co-processor, the segmentation of relatively large data chunks into smaller segments to be transported via a network. Network interface cards (NICs) have been developed to enable the offloading of TCP segmentation by the CPU of a network device to logic executing on a NIC that sends and receives packets for the network device. That is, rather than the CPU performing TCP segmentation, the CPU may direct the NIC to perform TCP segmentation of a large TCP segment (or some other chunk of data) in accordance with a template provided by the CPU executing the network stack. This variant of LSO is referred to as TCP segmentation offload or transport segmentation offload (TSO). A related term is Generic Segmentation Offload (GSO).
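The template-driven segmentation described above can be illustrated with a minimal sketch. This is not a NIC implementation; the `segment` function, the 1460-byte MSS value, and the (sequence number, payload) representation are all illustrative assumptions standing in for what TSO hardware does when it splits one large TCP segment into MSS-sized segments on behalf of the host network stack.

```python
# Illustrative sketch only: mimic the segmentation step that a TSO-capable
# NIC performs. The host stack hands down one large payload plus a header
# "template"; the NIC emits per-segment headers with advancing sequence
# numbers. Here the "template" is reduced to just a starting sequence number.

MSS = 1460  # typical maximum segment size on Ethernet (assumed value)

def segment(payload: bytes, seq_start: int, mss: int = MSS):
    """Split payload into (sequence_number, chunk) pairs, advancing the
    sequence number by the bytes carried in each preceding segment."""
    segments = []
    seq = seq_start
    for off in range(0, len(payload), mss):
        chunk = payload[off:off + mss]
        segments.append((seq, chunk))
        seq += len(chunk)
    return segments
```

A 4000-byte payload, for example, would be emitted as three segments of 1460, 1460, and 1080 bytes, with each segment's sequence number offset by the cumulative payload already sent, which is the bookkeeping the CPU avoids by delegating to the NIC.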