It is known to parallelize the processing of data packets so that more than one data packet can be processed at the same time in a multi-processor system. Multi-processor systems are useful when a single task takes a long time to complete and processing data packets serially (e.g., one at a time) would slow down overall data packet throughput. A data packet is a unit of data transmitted as a discrete entity between devices over a network.
Some multi-processor systems follow the ISO/OSI (International Organization for Standardization/Open Systems Interconnection) model, which defines a networking framework for implementing protocols in seven layers. These layers are an application layer, a presentation layer, a session layer, a transport layer, a network layer, a data link layer, and a physical layer. These layers will be described in greater detail in reference to FIG. 2. In such a system, control is passed from one layer to the next during processing of a data packet.
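The layer-to-layer passing of control described above can be illustrated with a minimal sketch. The layer names follow the model; the handler bodies and the packet format (a framed string) are purely hypothetical stand-ins for real protocol processing, and only three of the seven layers are shown.

```python
# Hypothetical sketch: control passes from one layer handler to the next
# as an inbound packet moves up the stack (three of seven layers shown).

def physical(packet):
    return packet.strip()             # stand-in for physical-layer framing

def data_link(packet):
    return packet.removeprefix("hdr:")  # stand-in for stripping a link header

def network(packet):
    return packet                     # stand-in for routing/address handling

LAYERS = [physical, data_link, network]

def receive(packet):
    for layer in LAYERS:
        packet = layer(packet)        # each layer hands control to the next
    return packet
```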
Increasing overall system performance is a goal that most multi-processor systems aim to achieve. Existing multi-processor systems usually parallelize the processing of data packets in a network layer of the multi-processor system and divide the data packet processing between a network layer and an application layer. The network and transport layers provide identifiers, known as internet protocol (IP) addresses and ports, that uniquely identify an endpoint of a two-way communication in what is termed a connection. The network layer also determines the route from the source device (e.g., a device that sends a request) to the destination device (e.g., a device that receives the request). The transport layer performs data flow control and provides recovery mechanisms for packet loss during network transmission. The application layer is the layer at which applications utilize the network services of the layers below it to perform various user-driven functions, such as file transfer, database transactions, and the like. Typically, the network layer first processes inbound data packets and, after processing them, signals an application thread (which runs in the application layer) to continue the processing. A thread is a separate stream of packet execution. At some later point, an application thread picks up the packet and processes it in the application layer. Dividing the processing of data packets between the two layers thus introduces scheduling delays and impacts overall system performance.
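The signal-and-pick-up handoff described above can be sketched as follows. All names (`network_thread`, `application_thread`, the queue-based signaling, and the string transformations standing in for real packet processing) are hypothetical illustrations, not the method of any particular system; the point is that the application thread blocks until the network thread hands a packet over, which is where the scheduling delay arises.

```python
import queue
import threading

# Hypothetical sketch of the two-layer handoff: a network thread processes
# each packet, then signals a separate application thread via a queue.
packet_queue = queue.Queue()
results = []

def network_thread(packets):
    for packet in packets:
        processed = packet.upper()    # stand-in for network-layer processing
        packet_queue.put(processed)   # hand off: signal the application layer
    packet_queue.put(None)            # sentinel: no more packets

def application_thread():
    while True:
        packet = packet_queue.get()   # blocks here until a packet is handed off
        if packet is None:
            break
        results.append(packet + "!")  # stand-in for application-layer processing

producer = threading.Thread(target=network_thread, args=(["pkt1", "pkt2"],))
consumer = threading.Thread(target=application_thread)
producer.start()
consumer.start()
producer.join()
consumer.join()
```

Because the application thread can only resume once the scheduler runs it after the `put`, every packet pays a cross-thread wakeup cost even when both stages are cheap.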
Moreover, in conventional systems, an application layer executes application threads to process data packets arriving from the network layer. Once the network layer processes a data packet on one processor, it hands the data packet off to the application layer, which runs separate application threads on another processor. Even though the processors are capable of operating independently, the processor executing the application thread may need to wait for the results of the network thread executing on another processor. As a result, at least two processors are required to process a data packet from the network layer up to the application layer.
Furthermore, when one or more application threads are executed in parallel in a multi-processor system, they often need to access common resources (such as data structures). If more than one thread tries to access the same resource to process a data packet, a lock is needed to prevent the threads from accessing the resource concurrently. Implementing locks, however, can be burdensome and expensive and can slow down system performance.
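A minimal sketch of the locking requirement described above follows. The shared `connection_table` and the `record_packet` helper are hypothetical examples of a common resource; the lock serializes the read-modify-write so that concurrent threads do not lose updates.

```python
import threading

# Hypothetical shared resource: a table counting packets per connection.
connection_table = {}
table_lock = threading.Lock()

def record_packet(conn_id):
    # The read-modify-write below could interleave between threads and
    # lose updates if it were not protected by the lock.
    with table_lock:
        connection_table[conn_id] = connection_table.get(conn_id, 0) + 1

threads = [threading.Thread(target=record_packet, args=("conn-a",))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Every thread that touches the table must acquire the same lock, so contended access effectively serializes the parallel threads, which is the performance cost noted above.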
Accordingly, what is needed is a mechanism that increases overall system performance by eliminating the shortcomings of dividing data packet processing between network processing and application processing, without requiring locks on shared resources.