Flow and congestion control algorithms of TCP are dependent on the round trip time (RTT) between the two parties involved in the communication (client and server). TCP adopts a "slow-start" mechanism to discover the available bandwidth, forcing the sender to use a slow sending rate during the start-up phase of the connection. When feedback from the other end-point arrives, i.e., after one RTT, the sending rate is increased. When the available bandwidth is reached the slow-start ends. Because of this feedback requirement, the speed of convergence to the available bandwidth varies with the RTT. Therefore, when the RTT is large, the time it takes for a TCP flow to take full advantage of the actual available bandwidth becomes large as well.
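This RTT dependence can be made concrete with a minimal back-of-the-envelope sketch, assuming the classic slow-start behavior in which the sending rate doubles once per RTT; the initial rate, target bandwidth, and function names below are illustrative assumptions, not part of any particular TCP implementation.

```python
import math

# Illustrative model: in slow-start the sending rate roughly doubles
# every RTT, so the time to reach a target bandwidth grows linearly
# with the RTT. All values below are hypothetical.

def slow_start_rounds(initial_rate_kbps: float, target_rate_kbps: float) -> int:
    """Number of RTTs until a rate that doubles per RTT reaches the target."""
    return math.ceil(math.log2(target_rate_kbps / initial_rate_kbps))

def time_to_converge_ms(rtt_ms: float, initial_rate_kbps: float,
                        target_rate_kbps: float) -> float:
    """Convergence time is (rounds needed) x (one RTT per round)."""
    return rtt_ms * slow_start_rounds(initial_rate_kbps, target_rate_kbps)

# Reaching 10 Mbit/s from 100 Kbit/s takes ceil(log2(100)) = 7 RTTs:
print(time_to_converge_ms(40, 100, 10_000))  # 280 (ms) at RTT = 40 ms
print(time_to_converge_ms(20, 100, 10_000))  # 140 (ms) at RTT = 20 ms
```

Halving the RTT halves the convergence time in this model, which is exactly the dependence the paragraph above describes.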
For example, assume a client downloads a 400 Kbit file from a server, over a 10 Mbit/s link, with 40 ms as round trip time. If the flow control algorithm converged to the 10 Mbit/s bandwidth instantaneously, then the file would be downloaded in 40 ms (400 Kbit/10 Mbit/s). However, TCP's flow control algorithm slowly increases the connection speed depending on the acknowledgment messages received during the connection. Assume that the server may send 100 Kbit during the first RTT, and that each received acknowledgment allows it to send 100 Kbit more per RTT than in the previous one, until convergence, i.e., the maximum bandwidth, is reached. Considering that acknowledgments are sent only upon reception of data, the server performs an increment of the sending rate every 40 ms, i.e., every RTT. To send 400 Kbit under this assumption, the server would then take 3 RTTs, sending the following amount of data at each transmission: 100 Kbit in the first RTT + 200 Kbit in the second RTT + the remaining 100 Kbit in the third RTT (although up to 300 Kbit would be allowed). Thus, in the previous case it would take 3 RTT = 40 ms × 3 = 120 ms to transfer the 400 Kbit, which corresponds to an average transfer rate of about 3.3 Mbit/s. Consider now the case in which the transfer happens over a link with 20 ms as RTT. The server would still require 3 RTTs to send the data, but, this time, it would take 3 RTT = 20 ms × 3 = 60 ms to transfer the 400 Kbit, which corresponds to an average transfer rate of about 6.6 Mbit/s. An important observation is that lowering the RTT between the end-points of a TCP connection makes the flow control algorithm's convergence time smaller.
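The arithmetic of this example can be checked with a short simulation of the linear-increment model just described: the sender may transmit 100 Kbit more per RTT than in the previous RTT, and the transfer finishes when all 400 Kbit have been delivered. The function name and the step size are assumptions taken from the example above.

```python
# Simulation of the linear-increment example: the per-RTT sending
# allowance starts at step_kbit and grows by step_kbit after each
# acknowledged RTT, until the whole file has been delivered.
def transfer_time_ms(file_kbit: float, rtt_ms: float,
                     step_kbit: float = 100) -> float:
    sent = 0.0
    rtts = 0
    allowance = step_kbit              # data the sender may emit in the first RTT
    while sent < file_kbit:
        sent += min(allowance, file_kbit - sent)
        rtts += 1
        allowance += step_kbit         # linear increase per received acknowledgment
    return rtts * rtt_ms

for rtt in (40, 20):
    t = transfer_time_ms(400, rtt)
    avg_mbps = 400 / t                 # Kbit / ms is numerically Mbit/s
    print(f"RTT={rtt} ms -> {t:.0f} ms, average {avg_mbps:.1f} Mbit/s")
```

Running it reproduces the figures from the text: 120 ms at a 40 ms RTT and 60 ms at a 20 ms RTT.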
In most cases the RTT cannot be arbitrarily changed, in particular when it is mainly caused by the propagation delay (i.e., the time it takes for a signal to travel from one point to another). However, dividing the end-to-end connection into a number of segments guarantees that each segment has a smaller RTT between its end-points than the RTT between the connection's end-points. Thus, if each segment end-point runs a TCP flow control algorithm independently from the other segments' end-points, then the convergence time for each segment is lower than the convergence time on the end-to-end path.
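As an illustrative sketch of this effect, the linear-increment model from the example above can be evaluated at the per-segment RTTs that result from splitting a 40 ms end-to-end path into equal segments. This deliberately ignores proxy forwarding overhead and pipelining between segments, which a real deployment would have to account for; the segment counts are hypothetical.

```python
# Hypothetical illustration: splitting a path with a 40 ms end-to-end
# RTT into equal segments, each running its own flow control. Each
# segment's convergence is driven by its own (smaller) RTT.
def transfer_time_ms(file_kbit: float, rtt_ms: float,
                     step_kbit: float = 100) -> float:
    """Linear-increment model: allowance grows by step_kbit per RTT."""
    sent, rtts, allowance = 0.0, 0, step_kbit
    while sent < file_kbit:
        sent += min(allowance, file_kbit - sent)
        rtts += 1
        allowance += step_kbit
    return rtts * rtt_ms

end_to_end_rtt = 40.0
for segments in (1, 2, 4):
    seg_rtt = end_to_end_rtt / segments
    print(f"{segments} segment(s), per-segment RTT {seg_rtt:.0f} ms: "
          f"{transfer_time_ms(400, seg_rtt):.0f} ms per segment")
```

In this simplified model, each halving of the per-segment RTT halves the time a segment needs to move the 400 Kbit, which is the benefit the segmentation argument relies on.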
To achieve a segmented TCP connection, it is possible to adopt a variable number of TCP proxies on the end-to-end path of the connection. A similar solution has been presented by Ladiwala et al. (Sameer Ladiwala, Ramaswamy Ramaswamy, and Tilman Wolf, Transparent TCP Acceleration, Comput. Commun. 32, 4 (March 2009), 691-702), where network routers are enhanced with the option of executing a TCP proxy, thus enabling the activation of on-path TCP proxies for a subset of the flows traversing the routers. Notice that in the case of Ladiwala et al. the proxies are completely transparent, and the system relies on the routing system to steer the network flows through the correct set of routers (which act as TCP proxies).
A different approach is to use explicit TCP proxies. Here the set of proxy locations is not limited to the TCP routing paths: TCP proxies can be set up at any location of the network and used to accelerate data transfers, and for each data transfer the optimal path through the set of potential TCP proxies has to be computed. An example of such an approach is presented by Liu et al. (Yong Liu, Yu Gu, Honggang Zhang, Weibo Gong and Don Towsley, Application Level Relay for High-Bandwidth Data Transport, September 2004). In that work, the throughput of a path is considered to be the minimum among the throughputs of its individual segments. This assumption simplifies the computation of optimal paths (which are then so-called widest paths), but it does not capture well the slow-start phase of TCP. During the slow-start phase (and the majority of TCP connections never leave the slow-start phase), the optimal path is not simply a shortest path or a widest path, but the solution of a bi-criteria optimization problem whose optimal solution depends on the actual size of the data to transfer. Thus, computing the optimal path for a TCP connection is a non-trivial task, but it needs to be performed fast enough in order not to become another bottleneck for the data transfer.
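The size dependence of the bi-criteria problem can be illustrated with a toy model, a sketch under stated assumptions rather than the algorithm of any cited work: per-path transfer time is modeled as a slow-start phase (rate doubling each RTT from a 100 Kbit/s start, capped at the path bandwidth) followed by transmission at full bandwidth. The two candidate paths and their figures are hypothetical.

```python
# Toy model of size-dependent path selection among candidate proxy
# paths: the best path for a small transfer (dominated by slow-start,
# hence by RTT) differs from the best path for a bulk transfer
# (dominated by bandwidth). All paths and numbers are hypothetical.
def transfer_time_ms(size_kbit: float, rtt_ms: float, bw_kbps: float,
                     init_kbps: float = 100.0) -> float:
    sent, t, rate = 0.0, 0.0, init_kbps
    while sent < size_kbit:
        sent += rate * rtt_ms / 1000.0   # Kbit deliverable in this RTT
        t += rtt_ms
        rate = min(rate * 2, bw_kbps)    # double per RTT up to the cap
    return t  # overshoot in the final RTT is ignored for brevity

# Two hypothetical candidate paths between the same end-points:
paths = {
    "low-RTT, narrow": dict(rtt_ms=10, bw_kbps=5_000),
    "high-RTT, wide":  dict(rtt_ms=40, bw_kbps=50_000),
}
for size in (100, 100_000):              # small vs bulk transfer (Kbit)
    best = min(paths, key=lambda p: transfer_time_ms(size, **paths[p]))
    print(f"{size} Kbit: best path is '{best}'")
```

In this model the low-RTT path wins for the small transfer while the widest path wins for the bulk transfer, so neither a pure shortest-path nor a pure widest-path computation picks the optimum for both.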