The background description provided herein is for the purpose of generally presenting the context of the disclosure. Unless otherwise indicated herein, the materials described in this section are not prior art to the claims in this application and are not admitted to be prior art, or suggestions of the prior art, by inclusion in this section.
A data center network may include a plurality of compute nodes which may communicate and exchange data with each other via a network, fabric, interconnections, or the like. The plurality of compute nodes may comprise processor nodes, storage nodes, input/output (I/O) nodes, and the like, each configured to perform one or more particular functions or particular types of functions. In some embodiments, a compute node may run multiple processes and may implement a plurality of processor cores. Incoming data packets may be categorized or assigned to particular processes to be performed by the compute node in accordance with characteristics of the respective data packets. The incoming data packets may be placed in one or more buffers until they are ready to be handled by the compute node.
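The categorization of incoming packets into per-process buffers described above can be sketched as follows. This is a minimal illustrative example only; the packet field names (`flow`, `id`), the process identifiers, and the flow-to-process mapping are assumptions made for clarity and are not part of the disclosure.

```python
from collections import defaultdict

# Assumed mapping from a packet characteristic (its "flow" field) to
# the process on the compute node that should handle it.
PROCESS_FOR_FLOW = {"storage": "P1", "io": "P2"}

def enqueue(buffers, packet):
    """Place an incoming packet in the buffer associated with the
    process that matches the packet's characteristics."""
    process = PROCESS_FOR_FLOW.get(packet["flow"], "default")
    buffers[process].append(packet)

# Two incoming packets with different characteristics land in
# different per-process buffers.
buffers = defaultdict(list)
for pkt in [{"flow": "storage", "id": 1}, {"flow": "io", "id": 2}]:
    enqueue(buffers, pkt)
```

After this runs, the packet with `flow` "storage" sits in the buffer for process `P1` and the packet with `flow` "io" sits in the buffer for process `P2`, awaiting handling by the compute node.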
Unfortunately, bottlenecks may occur in moving the incoming data packets to appropriate processing resources at the compute node due to, for example, the quantity, size, and/or type of data packets relative to the processing speed and/or capacity of the processing resources of the compute node. For instance, in some embodiments, a first data packet associated with a first process to be performed may be in a queue (a buffer queue) ahead of a second data packet associated with a second process (different from the first process) to be performed. The processing resources associated with the first process may not be ready to receive the first data packet, while the processing resources associated with the second process may be available to handle the second data packet. However, due to the bottleneck at the processing resources associated with the first process, the first data packet may remain in the queue, thereby preventing subsequent data packets (such as the second data packet) from being delivered to available processing resources.
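The head-of-line blocking scenario described above can be illustrated with a short sketch. The queue structure, packet fields, and per-process readiness flags below are illustrative assumptions chosen to mirror the example of the first and second data packets; they are not taken from the disclosure.

```python
from collections import deque

def drain_fifo(queue, ready):
    """Deliver packets strictly in FIFO order; stop at the first packet
    whose target process is not ready to receive it."""
    delivered = []
    while queue:
        packet = queue[0]
        if not ready[packet["process"]]:
            # Head-of-line blocking: the first packet cannot be
            # delivered, so nothing behind it can move either.
            break
        delivered.append(queue.popleft())
    return delivered

# First data packet targets busy process P1; second data packet
# targets available process P2, but sits behind the first in the queue.
buffer_queue = deque([
    {"id": 1, "process": "P1"},
    {"id": 2, "process": "P2"},
])
ready = {"P1": False, "P2": True}

delivered = drain_fifo(buffer_queue, ready)
```

Here nothing is delivered: even though process `P2` is available, the second packet remains stuck in the queue behind the first, which is exactly the bottleneck the paragraph above describes.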