The approaches described in this section could be pursued, but are not necessarily approaches that have been previously conceived or pursued. Therefore, unless otherwise indicated herein, the approaches described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.
Transmission Control Protocol (TCP), as defined in IETF RFC 793, specifies a transport layer protocol for data networks. TCP generally consists of a set of rules defining how entities interact with each other. The OSI network model defines a series of communication layers, including a transport layer and a network layer. At the transport layer, TCP is a reliable, connection-oriented transport protocol. When a process at one network entity wishes to communicate with another entity, it formulates one or more messages and passes them to the top of the TCP communication stack. These messages are passed down through each layer of the stack, where they are encapsulated into segments, packets, and frames. Each layer also adds information in the form of a header to the messages. The frames are then transmitted over the network links as bits. At the destination entity, the bits are re-assembled and passed up the layers of the destination entity's communication stack. At each layer, the corresponding message headers are stripped off, thereby recovering the original message, which is handed to the receiving process.
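The layered encapsulation and decapsulation described above can be sketched as follows. The layer names and header contents here are illustrative assumptions for clarity only, not actual protocol formats:

```python
# Minimal sketch: each layer prepends its header on the way down,
# and each layer strips its header on the way up, recovering the
# original message. Layer names and headers are illustrative.

LAYERS = ["transport", "network", "link"]

def encapsulate(message: bytes) -> bytes:
    """Wrap the message with one header per layer, top-down."""
    data = message
    for layer in LAYERS:
        data = f"[{layer}]".encode() + data
    return data

def decapsulate(frame: bytes) -> bytes:
    """Strip headers bottom-up, recovering the original message."""
    data = frame
    for layer in reversed(LAYERS):
        header = f"[{layer}]".encode()
        assert data.startswith(header)
        data = data[len(header):]
    return data
```

Because the link layer header is added last, it appears outermost on the frame and is the first to be stripped at the destination.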
In a typical implementation of TCP, the receiver of data holds received out-of-order data segments in a re-assembly buffer pending receipt of any missing segments. The receiver sends an acknowledgment (“ACK”) message for each segment that is received, indicating the last valid in-order sequence number. The sender holds unacknowledged segments in a re-transmission buffer. This process enables a sender to rapidly re-transmit segments that have been lost in transmission, because such segments are not acknowledged.
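The re-assembly buffer and cumulative ACK behavior can be sketched as follows. This is a simplified model under assumed conventions (byte-oriented sequence numbers, a dictionary as the re-assembly buffer), not a full TCP implementation:

```python
# Sketch of a receiver that buffers out-of-order segments and
# returns a cumulative ACK covering the highest in-order byte.

class Receiver:
    def __init__(self):
        self.next_seq = 0      # next expected sequence number
        self.buffer = {}       # out-of-order segments, keyed by seq
        self.delivered = b""   # in-order data handed to the process

    def receive(self, seq: int, data: bytes) -> int:
        """Accept a segment; return the cumulative ACK number."""
        if seq >= self.next_seq:
            self.buffer[seq] = data
        # Deliver any contiguous run starting at next_seq.
        while self.next_seq in self.buffer:
            chunk = self.buffer.pop(self.next_seq)
            self.delivered += chunk
            self.next_seq += len(chunk)
        return self.next_seq
```

A segment arriving ahead of a gap is held in the buffer and does not advance the ACK; once the missing segment arrives, all contiguous data is delivered and acknowledged at once.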
It is not required or necessary that every segment be explicitly acknowledged. Given the overhead associated with TCP, explicit acknowledgment of every segment could generate significant extra traffic along the connection. Therefore, a typical TCP implementation provides that the receiver delays sending an ACK until it receives two full data segments, thus reducing the traffic along the network. The Nagle algorithm provides that a sender must not have more than one unacknowledged partial data segment outstanding. Any further data from the application is held by the sender until the outstanding segment is acknowledged. Here, “partial” means of a size less than the maximum segment size (MSS). The purpose of the Nagle algorithm is to prevent congestion of the network by the transmission or re-transmission of multiple partial segments.
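The sender-side Nagle rule described above can be sketched as follows. The MSS value and the queueing model are illustrative assumptions:

```python
# Sketch of the Nagle rule: full-sized segments are always sent,
# but new data is held whenever a partial (sub-MSS) segment is
# still unacknowledged.

MSS = 1460  # assumed maximum segment size in bytes

class NagleSender:
    def __init__(self):
        self.unacked_partial = False
        self.pending = b""
        self.wire = []         # segments actually transmitted

    def send(self, data: bytes):
        self.pending += data
        self._flush()

    def _flush(self):
        # Full-sized segments may always go out.
        while len(self.pending) >= MSS:
            self.wire.append(self.pending[:MSS])
            self.pending = self.pending[MSS:]
        # A partial segment goes out only if none is outstanding.
        if self.pending and not self.unacked_partial:
            self.wire.append(self.pending)
            self.pending = b""
            self.unacked_partial = True

    def ack(self):
        """ACK for the outstanding partial segment releases held data."""
        self.unacked_partial = False
        self._flush()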
A problem arises under the above TCP implementation. Specifically, when a large upload data transfer is initiated over an HTTPS connection and the TCP connection has the Nagle algorithm enabled, a latency problem arises due to the interaction of the TCP sender's segmentation logic with the TCP receiver's delayed acknowledgment logic. In one implementation, a deadlock of about 200 ms will occur when an odd number of segments have been transmitted and received and the remaining outstanding segment is a partial segment.
When a TCP sender transmits an odd number of segments and the remaining segment is a partial segment, the TCP sender will refrain from sending the remaining partial segment, as required by the Nagle algorithm. Consequently, when the TCP receiver receives an odd number of segments, the receiver will refrain from sending an ACK under its default delayed acknowledgment logic. Instead, the TCP receiver will start a default delayed ACK timer, which typically lasts 200 ms, after which the TCP receiver times out and acknowledges the last full segment received, thus breaking the deadlock. High latency occurs in such a transfer due to the cumulative effect of several of these 200 ms delays, in some cases one delay per data record transmitted.
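The receiver side of this deadlock can be sketched as follows. The 200 ms value and the event model are illustrative assumptions:

```python
# Sketch of delayed-ACK logic: an ACK goes out immediately only
# after every second full segment; a lone full segment merely arms
# a delayed-ACK timer (typically 200 ms). Only when that timer
# expires is the last full segment acknowledged.

DELAYED_ACK_MS = 200  # typical delayed-ACK timeout (assumed)

class DelayedAckReceiver:
    def __init__(self):
        self.unacked_full = 0
        self.events = []   # (event, delay_ms) pairs, for illustration

    def receive_full_segment(self):
        self.unacked_full += 1
        if self.unacked_full >= 2:
            self.events.append(("ack", 0))   # immediate ACK
            self.unacked_full = 0
        else:
            self.events.append(("timer_armed", DELAYED_ACK_MS))

    def timer_expires(self):
        # Deadlock broken: acknowledge the last full segment received.
        self.events.append(("ack", DELAYED_ACK_MS))
        self.unacked_full = 0
```

With an odd number of full segments received, the final segment only arms the timer; the sender, holding its partial segment under the Nagle rule, waits the full 200 ms for the resulting ACK.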
This is not desirable in highly interactive environments, such as a client/server interaction or a screen-based terminal session, in which endpoints always communicate data records comprising an odd number of segments ending with a partial segment. This problem is also found in HTTPS communications in which the sender's SSL or TLS layer generates large records. The Nagle algorithm can be turned off to prevent the problem. However, because it significantly reduces extra traffic across the connection, it is highly beneficial to enforce the Nagle algorithm along with the delayed acknowledgment logic.
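Turning off the Nagle algorithm, as mentioned above, is done on the sending socket via the standard TCP_NODELAY option. A minimal sketch in Python:

```python
# Disabling the Nagle algorithm on a TCP socket with TCP_NODELAY.
# The socket is not connected here; this only shows the option.

import socket

def make_nodelay_socket() -> socket.socket:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return sock
```

As the following paragraph notes, however, this option must be set by the sender, which the receiver often does not control.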
Additionally, in many cases the TCP receiver does not have control over the TCP sender's logic. For example, in a client/server context, the TCP stacks of different senders will exhibit wide variations and are outside the control of any network administrator. Therefore, it is not practical in many cases to turn off the Nagle algorithm at the sender or to implement a TCP sender-based solution.
The deadlock problem and sender-side solutions are described in “Rethinking the TCP Nagle Algorithm,” by J. C. Mogul and G. Minshall, ACM SIGCOMM Computer Communication Review, January 2001. Thus, there is a need for receiver-side control of deadlocks arising from the implementation of the Nagle algorithm in a connection.