When distributing data via TCP on the Internet, it is necessary to consider changes and variations in network conditions caused by various factors, including delay, jitter, and packet loss. It is also necessary to assume a wide range of receiving terminals, from high-capability to low-capability ones. Conventional, general-purpose TCP stacks have enabled best-effort data distribution by tuning transmission control parameters on the assumption that the communication terminal is a personal computer (PC).
In the above-described TCP/IP communication realized by software, protocol processing is performed as software processing under the control of the OS. The terminal's overall CPU load therefore increases, and protocol processing is hindered when other, higher-priority processing is performed, sometimes making it difficult to ensure real-time performance in data transmission of a connected application. To avoid this phenomenon, terminals that realize protocol processing for TCP/IP communication in hardware (also referred to as a network processing unit, NPU) have appeared in recent years, as shown in FIG. 1.
FIG. 1 shows a configuration of communication terminals using the conventional TCP/IP.
As shown in FIG. 1, communication terminals using TCP/IP include, for example, low-end/middle-end machine 10, such as a personal computer (PC) or a personal digital assistant (PDA), and high-end machine 20, such as a commercial system server.
With low-end/middle-end machine 10, when a user commands execution of an application, a kernel under the control of the OS performs each protocol process sequentially. In FIG. 1, in transmission processing according to application commands, the kernel processes all functions of the TCP layer and all functions of the IP layer. A network device driver and a network device then perform transmission according to the protocol processing of the TCP/IP communication. Reception processing follows the reverse order of the transmission processing.
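The layered transmit path described above can be sketched as follows. This is an illustrative sketch only, not the patent's implementation: the function names and placeholder header bytes are assumptions, and real TCP/IP headers carry many fields omitted here.

```python
def tcp_layer(payload: bytes) -> bytes:
    # The kernel's TCP processing prepends a TCP header
    # (20-byte fixed part; options omitted in this sketch).
    tcp_header = b"T" * 20  # placeholder header bytes
    return tcp_header + payload

def ip_layer(segment: bytes) -> bytes:
    # IP processing prepends an IP header (20 bytes without options).
    ip_header = b"I" * 20  # placeholder header bytes
    return ip_header + segment

def transmit(app_data: bytes) -> bytes:
    # Transmission order as in FIG. 1: TCP layer, then IP layer,
    # then hand-off to the network device driver.
    packet = ip_layer(tcp_layer(app_data))
    return packet  # passed to the network device driver

packet = transmit(b"hello")
```

Reception simply unwraps the headers in the reverse order, which is why the text notes that reception processing mirrors transmission.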
On the other hand, with high-end machine 20, when a user commands execution of an application, a kernel under the control of the OS likewise performs each protocol process sequentially. However, processing of all functions of the TCP layer and all functions of the IP layer is performed by a network processor (NPU) in hardware (HW).
As described above, communication using TCP/IP is realized by software and has been introduced into various systems. In high-end machine 20, an NPU that processes TCP/IP in hardware has been introduced. Because processing otherwise performed by the OS is performed by hardware, it is possible to improve throughput and reduce the CPU load.
Patent Literature 1 discloses an apparatus for processing TCP/IP by hardware. The apparatus disclosed in Patent Literature 1 provides a hardware protocol stack that can perform protocol processing for data transmission of a connected application, independently of the control of the OS.
Patent Literature 2 discloses a method for processing data for a TCP connection using an offload unit. The method disclosed in Patent Literature 2 provides a software function that issues to hardware a request to transmit application data with a plurality of segments concatenated in a batch, rather than transmitting data per TCP segment as in the conventional method, and hardware that re-divides the concatenated segments received in a batch into TCP transmission segments and transmits them sequentially to a network. By this means, Patent Literature 2 seeks to improve the efficiency of protocol processing in TCP/IP communication.
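The re-division step performed by the hardware can be sketched as below. This is a hypothetical illustration of the scheme described above, not the disclosed implementation; the function name and the choice of MSS value are assumptions.

```python
def redivide(batch: bytes, mss: int) -> list[bytes]:
    # Hardware receives one concatenated payload buffer from
    # software and splits it back into per-segment chunks, each
    # no larger than the maximum segment size (MSS), for
    # sequential transmission to the network.
    return [batch[i:i + mss] for i in range(0, len(batch), mss)]

# A 3500-byte batch with a typical Ethernet MSS of 1460 bytes
# yields segments of 1460, 1460, and 580 bytes.
segments = redivide(b"x" * 3500, mss=1460)
```

The efficiency gain comes from software issuing one request per batch instead of one request per segment; the per-segment work moves into hardware.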
The maximum number of TCP segments that can be concatenated at a time is decided by referring to the congestion window size at that time and using it as the upper limit. The congestion window size is an internal variable retained by the TCP layer and indicates the amount of data that can be transmitted; it varies constantly with changes in network conditions. The above-described method of transmitting concatenated TCP segments in a batch is referred to as “TCP segmentation offload (TSO).” TSO is a function that performs part or all of TCP segmentation processing in hardware (the NIC). Because processing otherwise performed by the OS is partly performed by hardware, improved throughput and reduced CPU load can be expected.
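The upper-limit rule described above can be sketched as follows. The function name and units (congestion window expressed in bytes) are assumptions for illustration; TCP implementations differ in whether the window is tracked in bytes or segments.

```python
def max_batch_segments(cwnd_bytes: int, mss: int) -> int:
    # The congestion window caps how much data may be sent, so
    # a batch may contain at most floor(cwnd / MSS) full-sized
    # TCP segments at the moment the request is issued.
    return cwnd_bytes // mss

# e.g. a congestion window of 14600 bytes with a 1460-byte MSS
# permits concatenating 10 segments in one batch.
n = max_batch_segments(14600, 1460)
```

Because the congestion window changes continuously with network conditions, this limit must be re-evaluated each time a batch request is issued.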
As described above, TCP segment processing need not be performed per TCP segment by software; instead, a plurality of TCP segments can be processed in a batch as one unit of work. Because software processes a plurality of TCP segments in a batch, the amount of work (CPU processing load) for software under the control of the OS during transmission can be reduced.